[Binary archive content — not recoverable as text. The tar headers identify this as a Zuul CI output archive:

    var/home/core/zuul-output/                    (directory, owner core:core)
    var/home/core/zuul-output/logs/               (directory, owner core:core)
    var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log, owner core:core)

The remaining bytes are the gzip-compressed body of kubelet.log and cannot be rendered as text.]
}3 fmZ@TNV헏3׽ ?y~_?.0Q;x{8KaSu h3lճ=(9]7eR>}CwR=IH?-#!9 [g~&Im-+>dSS|b[Lq`yyKR9G,D3zp@EP}1~MuDnqx?%?nx$O%T9TPް5ڴNZ^\b0徽$=a]|8DT߄ƛȤR"⑋TcPHุ(0sx%ι֩ӈS}з60pbЀ63;W#ŵsKSrmҢS_OCiWn|*dzEצ(YQ5dEP8h2@yAe0Ơ %K,,@40  "Xp* oJ_P9sOϤ) $Gk(Ko]O-^ȅLU|/x, B*Caj3jhye4z^[ ih?FBͧ|W y I ר]QQ,f%*ŰtqrMq;>XJA;ͥdEa٪/!ZTMc(7?A(y" Pb\_>jK ^Ȍ ^`A ƜT=R=mO%oi5_Epbt;rU_^o}T¤>OP fh/#hHHLBDN&x҅}FY15i $  rܟ+{@ 4ҥhXKcψƴ @A QzHLdu49fH5Ac5F˖ZJk)1))b맦h U<^a/m={sIO?>&G`\!r -%& }o;7ss2I$)7I W)"ء`*"†XxL"H,Yyd0'HkK)b΂w_8wkRêg 5xOL[B|z˔9nc|JՈ|7{WؘLHzr%D EԊD.Qmܲ |ל iKo=n+jԒ+wd੺,h 5fO1v zarz+,:u~?.:E/np09sH[G“X:ϤJ-boFt &&VQR1qV{tE NVFvdٱ#K-=,}ew->Ic yPzT6m![UoO䘒U=`R^JD_%"Er,RFpIɜpq֠Vk`!eJ9#LY ,Na+WBxA ݨ9rRn&x9 Ȣ`;R.k%Od66@~a6t`dS>/GGu\g\Vܗ)eY|P1W{6CIϔKpO'>'`tq 'W'eʫ3zwa+Oj0 ` ?a \pݠ-=nPa tQ@&, ~Ag\gOQzw7 I7x lV.*|x?&O廉 K PqٺβGŷ_zsqL09lR!oQ|`Y ) gIb1"g0F(Nj4(.Zk:!mP0J=pxaR[ERc ܂ N&xv07ڪ׃^fg y,Y0;(X#ԥ  \El GHQZyWUR\-Q2:DIb9KjǩfS8 Z$whGq$\@KeVѸLi*GTZ1QMKc5oSB`j &4& uCdJpJa^iGjd~sVk/;^sXe\frk$$7vz% څ/¶մi)?L *t4.X25Zgj6SfjE%GĮ`]%p1:v%XdeW/]Q%$GĮ`Ѱ\)gW ƆUR]@vŔ Gî JRu*AZveeb/pRg}zp}+ wv,WWrCNNx르`>% ]?F<w9p/ɣ \͏C JB[~H2͠\Z*cyF H}$8=C4 ?o UكÙ}g$IHPg~ si3dDs.DB@flU+vqe`AT+| ǿ]gW )ݚrkOn[ tIμ+s}9;KAxiX4b-p}_y9? 8 맫 IVebre*Q͸+ Lש46 K,] /٠UOr.`9׽ۺ漝iaZ_p+.|6gUO2ħ]&Z?`D`Si^8e KwQJ]:MA|oŠ| T{b I𥴵q| {s+m WAf_hx2N? 
`6]dK]qÝSp /6'Mnd֜,/ՙJ{GP=)i)DvlUj\W 3\Yvueu(O4nuSKƺJy8[k=, gI^r rK_v Ys0m_h#y!k0$!B6Vܨ@\;3Ty>J,rR*\Ɗp.w."T P;%6_nPz(5޻\b+Ɨbzŋuwfy(oe]OQETV MjT[H]|DiQA2f)RlR,۟,\wN_X/}IDy64u*nVƍM^r vLkmAKW ˽|JQcxxj<9,CNy| f0mh)fP$Xn W^̹M9erZ HA#&| >HM>q+vhi:ޤx ją8*Dh \|4 8AI 5[=$zfW _vu?hŮ塱+zvE[v' A0KPjk]H;@\7e 9wn-Ƙ=ii]%1,HbW+y,c%1@0c+D㔦H7.0G%W=C)QP"-0%]]]U]WW]Zw35jaXjKW/.w{x{PZ9yA tbR1A#^79N51;KIxA/QSff^+)t9Z:V2s$!3#^7遥ۮOP(:a1\ua͟G?KT#AuY}VW*Kpd!֗)(kcQi0)`5eSe9dl:z 4Kϭ1—&2<ѸթƵXsJJ>- hCwq(lџpv^%Jlgfe{MͧG ^s4[K?=zAQè?MizGӄMqTG9b=vS~V#`|@0O ~(BOP;9!W `u8[ `wPD辋e,壸zIJlMj-(E3qP;c?fW*g7f.',=leb^L?l `V9OFy2rΓs^7r& A#΂!KY…e B SJ F0P[YQXnObH{GmG!6l@kAډ(}N3`&tFYo)s-[c8#k%X>bFHJ?AQSjUWCQz@jQ$ќ>/@JҽX%+Pt"q&IQP^?~-heV\˼NYJj?"νLlX1h˭X n(c$_:Dh0|gox"۞߿Pvp_@WK:1 Ivx;+gmmխP]gsnb#urk 3/>`*)Hb|4*&a;, ["U߭U I'!'*n!K5MŠZϭ7Q_РVHՈH|Ȥvqeŗ!юBF9 FF_#;Gu,[Tnз!{޸}=5V,hMb`{I]B{/裢a9gl?DjDԶa eyB+쿋uܧ ڢitZ9\RI`dsU?6ygbt+܇'}wf-Xbt_Ka (WK ℠ $5(K)a Kȳy`j}fpQy)W){Ft^'# ɥa)S.)xq].L^z3zP]\|uJޙe΢% ձF9w5{T)t?IqbZZ4cX|&?NHR.6K4D`v&1@?LE"jbth#::\]w<]SMqJ:;"E Rrx& :03,*8LFycAP@N,=ݽbVY!cʍV` )&B INr*Eè1GOt4%~i9Iݴ?ea&?P7sp F)37gvInbC2.)ZX!qSDZרu!Õ޶#W(:KP& >.*,Jg[[ǝOh}ϔUI~| ٵ)HD1mNM+P}2>_qê,dG=t9mD n^ m:q GszFO1||JI?^vUgt^+A:^WUfc7)9KM!,YeowUFS:v:ח@߀8:{ݏoٛ~zg瘨;=? 
8FeU h5&Jc˛3t}|{ڿB v<[=+?U*&M@ Fd A=#ɦ]SܮԨ'dN!~*kq*1bYI'Eq*p]>ξ>&HEr# Hhw/HTH!-A ( AGЦv2&ڠ(tJ6LK^ϟq]ӻyI ³HT:3i,AMd)Xȅa` i LwE"u.9w 4zK- ĜfDkK%?fu40MhkPwkQ)"x@Zh)\RU;3 |Uف`/OY1$L27n[UYLjeD;{r20-p[KF'o1Fa5o[t~UZ I}+30bQ\ʖeIwL {e6ÿo?(oM=8 \eCvV'(qË@tR.^ȹ](6IၱkT V0b:vտݓ ΐ\#e5.[ano*IGk-zd0]^:9"daQ§NSāM)rF8N3$u6بY5*CzHm_jRz\TknXe6anQ 푃bB[zCɍÃ17a=|oPRpR[q8~\ݴ"kzcq:qn:cUX*601Wnw.Yo.Gc9NQhZ;sW Rx9L0GnJy$"kY">i ?*I(I )Gh`EsbɳyXkm챹nsaL2pYfq6U% nCҡ 7WTJ  q+$r]tѮrnV"[nqy55q`Nɍ*ICo)I j~rХj:uRzTIT+DFGd_t|c8`TG?U| G#(!$f}ۓB9ۋ?."Q}Y 4j"'I.6}!v|CXN EQJcPA- R2BTY4aVZuOmmNv:VYl'\ KcjY@qqIb wY>WZp}==OfnV^q'o!D!e֠ XX1c2b=6MVHKģ㍫݋EPsQ8'O T|h\#qsN-@Q7/ IژrroF*\|AgyfvTd[+{%&quc&iEx#x+>jK ^Ȍ ^`A Ɯ\}-Ñ 48y+OT,J?\`x;tT/Mۿvޭ2*YhR}k %I.P!`V0/2.DjB,G9p\e"\H☁ i*,  $C$iXG4 naw\,G;WcZad(QHLd$&F:łKR|ftu [;P#(~Zq@ia4 V0#)6gF?{Hnp^6g Cvd1 Y,y-yfحՒln0iޚE~Uk]%}m\H\U }Nˮɭ] c[>l :Ң!rVLRpܷ+CDtJQ4JʂӦ,ASh$UGAd9ap i-MrL'C>&j BDŽCVLa[Қ39s T3Kgm=a%T/1>f>{}b97)vZ;;7ۘ}]v뤹Mx% Bqe*é% @\OZz T^"4#ԦޖH";,e38VڑlE9~K, 4ilhW\<D B>:tTJb+/ßy 1*?_XoPfH`KvQ -Nj QAgiނ2bKZybKƖ|Ž3r{3io|бy\i[v{ Ie 3,~)swlṷ쳅^.%/¿lVv'L4p{z3aUJGm]7.E,hJ>3GcdX$8uf\i(U qR%ARa X1n]9KM$BFņ1!15gZiS׷ug2C4;oɔ>{;/1^p2\֋ 2h'҈12+o{e)$8?|4 u ( NF5G$/.<.[]gӣvGӓx[trk9dZݻ WklW RyE+,]?۫wT:/]c٬+ĒQ_}Vʧ6=j^*}=k~/rVbpn4\wy;eҨ|ϗlƖ8NBZ:z%rtˢf,I4p̈́s?q\'YƕćJ t"XU1cP+wWyV_ [cUcU٣Ui '%q9R2a\+ !@<,Иhfc}:Wٵi1=78{Nj5|s"c4s32\RU1J;c)P Vbz@;Q@ρ"{wui}˿0!ֲ]Fߞj1{a#~ȻzC7_fhxw|&研&31hL%Zz(93i Y/^;B:w tRk2opYDFʻAQrp8pҁm;W:i4c )) 2"%j[w,ٴ#^ 2@^!6'oVo61emMd;=9׌-{P+\sjߥPb5%p{/ >FJy.x"iH#c%8|_xރFIZw`ZT_:Qx&c46*H=Nn.X˺G~+X*7Y-ZG]™hep#fRtTkdPܡ@\K|s =뤖ų];f8pCmS2)JYQ}DdYp$=. 
4s#9 e.P *Վc$Tw dof X EfFP s+#e2n5J4w;8uG,ߕfB*ZHE h!-TR(KO%Μ8K9md9_K+ǷxAoQr|K#-+Ưp \8 rr-\p )p \8`,ȅE@.ȅp ]p \8 r@.ȅ\8 r@.ȅp \"fSu*i91TRLR\;HuTBYh\啐ȈCe-XxњD\C&ΐ"ǯ6x\Ip**D|"SwCi ƒFBMG-AdX "s[gg=jQϪIŠP.xd, oμQwR^ XޥK< :::2Fu0t}w3)%?Hxnx"|%"jp?$",qUbUB!|L8Y@[@:T25k)~5]!,(r jCcmƴb\o2c!/i#xco0>aom[kzÿ]k{ͻpubl} pm:} iPZ2T5/jZ:FW֣@@*JsHUxǓWwu\^{*ʸK)ٵ΃ց3ɉY+ÝQIk2Q.ȨGE;py@%pJF"(:u֝}}mZϪGn"}\#|VU34~K ` b ^=^ZY9^ ZJ_MﭠU)fb[BVN'&J-QA:st!ȼ|iM;h׳03/Uh;U<p rZO !Iԣ5,j&EDaRkb 'Npop uLZmiK@t;ΞIu~F7vq}{8󍝯-,>gw2M|sg-eF}huU1ťQaDhBw)H)J &vBl|iU<3چ:^((ʹ(BnCQL 7F=X Bg B>νTƆyw{ꩄUS >˟{ ItX>u0B+(͌CIP1 ;kHNp?? CY:$ #A  qA$S(pp%uHGE7"H k@h QmHl9BfqQZ甥 "laR # U# ?'710J~^j|jRAos3zޭϱ7gRÅ1DpJ^ckoM̶q&WԠ" SQB9aWF+)Wn}7D=\8FkzE{=qmTgSY<-.?vǏ>?2|O?I m V ؿ&.VsaS=~aC~ w*b6#1_nٟ݃`q5K^V2-7L˳ ǿF(]5͚F4]z iWv9v.KB 5_ČH;>/po oi]/GoqFPQ߹\MPF] h:ؠh$e4%%ֆR9ZΜxKB;51ʬ @b"}_ $E8%5ʥN':gE8ٚxE '^13w](. :.mU4Kif:_{[qr$59|682y Q !,E0X@}t73 ^83 ,C[TO*QKqv.>;rL@?x>Z>ϻhm=p7gd}.l4Ż`Y 1P#xA༷TD҉X(Ow;+Ҟm&>$V39|?Tm m6v{p&Lϒ :%AXǭE/_lOpcюovRsaOi7V&H e痒\S9Xy#J)Cي!ǯiH3pLJ*(%NdE^a?y̦napb4S͓`0:?Lnx 0½g}+JftBHML(J!&bXS>PNw$aI*7 xfGx(LIP,|Y̾RyHH@d7$왉Xw$cPBPڤ4KY<`DTQQc ͝"0D2b,V$H!%%B#Yĭޢ <{YglhEfT7?h"Q~G}ظ^IW4,b4;*=f cɿBxiCu}{d?")Y"Vo91`4{_ZW(a}2njXXNUUY*%y vׅV}}7{A;9Jbf3q \@&1+8-D+ >_g<:Y .BóU2x>~liHq5u r锓VOaFMv&qi^yw/2|'psj|NA |2ꟻo=JUՎqR *p'υAw᣼k h0",A9 JTdK:B/ZºB.Rdl\ g+֐Ȝ 'Fiy@ |҉#AiMq1o}EW}.y'Kaj&|U̙ Jnn.#lFFkVQJ l66OgG)yk .֟GrіIct ,&FNwO Of >yQc湑aootluh&2qwG8F[wSҜ,2n8>Ѝ藕Zg~"\kazjgj›57ԥ|sHE<\R1-Ă'iq|]u\GvUtgT4lt[B:aW:OHg~H=8`9oNzc0e7SD  ҂KA6J>i U$}9O/Q[BϘhNH0hpE,Z!>>ٖG d+di9J["Q'\&Fjΰ'Tb.kDZ1ee^%WXǵGluuZ'Ϋ6M"e F?&;UۛƖNwy^{eޫ"Y%S+Iaj=V)*ibs^DƁ=~!'&}\pM I)FQ), q1xFS盾\8CX dzOUDCl#TT8/-f>p씸h<&,JU@l!H`WJԌ(iJ8%%Oxʇg ef^M"Ի-V4munf}7'o8Z]PNx:Rs):3.׺ mWXq0 qf#2_ʳ'݊W$azvNz^&" A@,qxsp &ZO}H xHDPNERu8j`sۧmt f΄u *~g_F^:(D1WWrc؍ 3ds-*"4OOuk%D>*T,PxGo;ߖ@?{o'mm]"CRlEpBv>VN1o9w?0 bۢ I[T̂/[.~IxGthr/\u1X}(JUkBK+LJK~6WzU`ne@O甖V@d۪˹hx69߲e0myPMOO9D?nLPxs"p@F Ԋh(yVi[ե06{}ۼ:ܸy׼|cX{Wf޻:h;Vbtja ;{y̲"`ys[{2.vA.v.㿳^\ݬjW2qFZ7d.֫zg,{eC0vL:k"^Ql<5> 
XFs8y*V1[hĮD?yS8nsjd* a7 yQZ\ y{7s`lt7ϟY`X{#JՆv !칺q il_7C +pb/%q`9ޕs˔ 4RUړmJ~8 6/bp,µM WM +ѐp]2 .Iӕ+Y%|l!{&^0 hpD^.I{vHNuO)=fVN@Xn]ճ;BWu!։"36b\եdh)= I(3}>wig?DK9ڻS)WU";X v"a_wq946asD("C$z+ȵ51| $`ň .ח"3P܊(O]L|]$!Řb_;*% XUȅ1_wFVոOuK` з}(VcqAԞRs\[=.'$[+pjI۝mbgw.:Y0)d5&kx4g58CIdѼBFv*0{#Ws]Eɪgլц*}͵D'40핓Qh<0ĕ00eb HʟJ⚯]QOcqZA R'K0QRIc6P8"*{RK XBe.%"u'@}8IsF 1HdJ{R .e7vZ3A}%g,#=x/HI4(P1F)R1L0BJ"hx)nNF %JHͱ*ccW6GHP@fI mp0$!x jǀyqIL`>I-˄ n뙱$SI8aũ3mUVa;xUu]SzQ2Ӭn'j=e}Qq纏l|gci8:=\gLtö\`H~1@))9ӞSĈv!Ă2S5ZN,X[yv/u-חyM0qsM0AMˮ-0'suʔ^dꃏn*Y9{WGqt^*ɲoZ9K9XGgMV-,3|gMk6xavJ ,Oz`ilk zqĵ|y ,3Wpf˘-c1FG *1ĽJ̍>; 덶̹8p+`}Xlά|u{(tTNT-Pysƨ$=*~:^XLliTS&07ѡ{wYʋFᘷIWe:JV}߽!^ dvI2j.׸IԆz5fa!*XB$$R4N8{xe@IWZUz >7ǟ{^Y+ƹuN6LenRp}tg1I\s6lQްtjo*=WRT+q%O!zz : ͅn> *Kv7z`Cj >lp9ɂ""&r!b6&ߓKg ]?, iJ]jwQ2PTs .F2 U3"j ehp& F7@X%i%62nu4[mMr=)ZKfs-m89x8wJkf.y+2gM̶5>EPn+ZDT= e^mWT!׏$#ȪdD 4ZGR̜#|"e*j^.6wmH_bm"42 Y,E<,bgKݲd"f_=v}l`d#[I%AST:ZcIHc)9ag9 !+JiL  p*av5NҐ0Rc"-,QezϽugMx1w(uą:4S.P==lZmn˭UϷ;^˵-*1YEqCTș 09GLw儉sZ2G2BY`>I *x@#f4dcA@αl9D/WoD+턢{F3塻V`u: K嬱1J`uhƑ$ `Ff فtNLBTgS-MʌƱ eT4!2/[ITtHH;ΔA{/OW*H Ch8JfUI!*6E CH6jg\Rez`9Ũ0w .|,^2W'.? [Gx> 'dӥBr1ï2oࠤ/ ~7&?̟UJ.AX8)5 xg n5)a2c θdb˰&t_&O6{FDTd.ycɻ) MnJvg$ʇeQos~10ԵJ-ѾfZ`t%znZʔͫV)jr^G{RVG;K_:񁖏tvx \~pfH_w޶780*0T3'(>LV˱$ƣ.aƺZ%PwtՌnYg3W';ژFV?Q,x4:e'mg-ou]v= .uR69\FJÊ_g-S$ yP>tߣޮtJJQ9ieǣA8>@Oxoo!9|;ZqD#p- {]].~X{9sʸZ)~(~< %<²([lAAOdAfUt 2=pXn  jڛ7- 4}>lӮ;v%^p[b3Yn|n@Dm]>tXR:ߡYIQmH?72:2b̄ 4hfpթX6̵Kӊq;. 4"3.M87,M.+NJ;'<0Xf:KQ3e *ϑPz)jos a>/IuSO ?,,O)OhW.m  87<)hi/ԭ7sg)A@tHiQdDƢ&#>YNUuFށ=D.#"4MCc nr,o>%#-*t%\|Fs*WϤcj}1{o z{Qs-G}- 8hADM%uYL+Z,ҥB-A*lx4~nk{@MYp d7!@񞋆 !25m( +l OsDm3:#:ϴISf [, x2 Odhu^{f'lڎ\IVgTY>08m`)j{֝uZK9ÜmՐ<ύ+–@BAYb/'l.u3űWALū[«bH? Ë[|4=WɆD55M]uEB|2ӗttnHg$`$3yiLA[|G(.!Z^<] $Zn\_wc:da)g㙏ŘKI, 3pRyo9@"-f.s.{gkB\t5[B 56r &k)EԆEF٤ो$By *7Ȗ/'<$ن巘(1 Yړ[WCߑ.}ugvЏ^px`ˈvJ\}f[՝UdjkjOӡm 1MK6όrL̒  c^DsR}/eubEQӛm է&_'a wO 4n8?pGu42dWr3{/(zAj F̄YM`u)pz/H²7RZLK*dBX"r֋Te0cV± Cn9U{A2f gL#@M $Y#A"H>` 9T! 
|>d-[%JZMJڳ  )yJN&hSN;5taK Zp!nzͬ=UxWz7j|FHA3Y3 `;VavB]O}`_ qhhsڳ8gƙh5IfD{ {2Ї_3mtY``]D91%c⁆h%Q.E&9;k%Rgnф $|c҄Ddh %yYB&b v{NUVqFJFt\S 9pٚj^"RQy٩M>r $nw ԛib:$.|^x#hc2]b>EfS8He9:oU9Ѱ2&X&J%C:UZM-aH 2:B,β H%)_4b] k5;ށvUw܁J/܁+t~Xpc(>A~WR:gtvNWWS^)t孛4tt2>[ۯٝ_}y5;JgTP7?8_Lހ\Fp{/zz6t\8sf^_=Jkt|Vq!5RJ߀.F0k]A`=|29zc[mnD߇ógb[ -VCZ^WOg{[)+ôi 9x)\9H%b[`!`N";9:J&,MCT&;p'*y@"1[!FX 0RKR($\hˌ R RBDϻŽugǝtSzGtY3ӨE HeW o?;Co/c4TEVl|UeT4P5|[>}Sm$b2e3;_.͎A4mG}ԃE)Ǜ!7mBIc _Wz~9]Ҁʚ )0ep'mHs&Le4`Jɱ%ߞq/Mֽ&]f`;L? tf\+ٰ]vkm= v8L)EJPO=8B'gDuI̕d iWWE*J]kl ,8Xry&Uh:%!\dT\D-(U,rAZ"JB#j3p! X԰1&d=}{jw i暆~#?%T=0~Ƭ 4/@ 1AL9)aB߱xe9Ÿh]n< EQTxm"gAM`T'Z~'A§MYl]Ql%^]vri&4=ڜW]jh&{ ryEC*>)LK/s4YQrex엏f[!l<5.[5_^] 5_)|>K]|uZ鯉GNƣ1$}K|/E=[mseF?|lϪ% @^)9a^)9+%g䬔GQ)9+%g䬔ҫTJJYRrVJJY@䬔UƪUJ*iUҪUJJY)9kvBaj*PST$5IMERST$5KzVon~Feuln~z;tKez=T[zGRI5fNj椚9fNjjg-I^;/a% 8"d)Dy!4**)CRs^n\،I[;sfMYj֤5fMY^}^s*='ۜ 29[*(l<fV~fJ4$B d5g fvߜevٲiv˲evŲ}`_,|g^1Z ]e D9Yp3Lʀ֚$L&0d]>@:^8 % uQ;C,ǔaNF;k$9^LJFgz8_!gqX} C& $bk7kRHʱr^DQ&"id3.7׃~[RG&Wt# %@3wvf!riMH3 A4,q @H^ȄJ)Bԙ{ ŘCHײUYaPYE Ho]$^JtLB9CGx&IвFΆ㑔\T kr࠳0-Hx)iϑ2G.U@D9 Y#)6i% mBt'ף9ZtH >p1pnH>>b>Ef,)bVN=+@ErU3F51`\#lTSCmCHĬ#L*AB,X΄x$T2R(7}} #ׄo3sH|`x1%"2qETmx`\fWNWd|J00 LU&?dŷN>AyBrl8k9 |8,}2 ;5Q![$[e)v_Ar'`tt)Krm67ड़=iioIzL KL_r*v`kewmu癵IUY0&hde3 -G}PɃa,rc/@:&ez~, Xh$cPZx EFN;k]XGן?5c-?5IJ ?Ub]bW?Wɤ%3QŨI\pJnYc g0o}s FZ,pI{L"gqg+B篼OIIDTAFRVJ*F?tSRB_`*")4Bhe6y֗(Y)?rU%>e6Ve LHYr")a8 BgNW Dt'n#ӁY$ZƏac|wLr{I;ͨiFiדfݣFxj懅Xrz:|IY1&#t1L‚(FU.0/J\r+}T>b֞Llm2$]8!$mr*E}> lNȍB-h́aۑm?9.y)hqdKI\M1Ql+:@X9) 3)"4p@F k2dE2U- AlWz'q?1 E/VN$j1{a Ԩ XM.co˯h-2v|AfNM}}>lj[y_ΘjnXX޿T Oa1 ̒.䦓yܔ U5.oӘpXܠ Ey\CS"Zt󵼛)*gLx'/!5$  eIh\X:xaX[D7tjc)[sD}і6l nٜȖv6t\MAf :-wLZni.ޱCӖdqﺶe# mGtbzxwˍ<9M*n3Ih Q -F̈́w1ɬ\xa#wڅJKu! 
tҡmri$g΁rb(jܝ8Ox[r}r?ye~O'D2)Q\xCV99*~[ׄA jQKWP>k4X:F}W@n:: :Lwhk5C fH"CXUIT YQP(>9:utRƛ>Ҫt=VZᳮiգ5jJ;Hu HnRE<ݝ&`/~Ԕ@&zB$!+﹨(ۡ=uG^ցsYrGQ][Vgc3m){1$CTg@ >0P;%XZ施nI[36CL$pJuA/|#N\1fOI)]1^=^ \ЊP?Vj-\Z*!# B̜|LkҙsmI!QdwprKz+Ox==̗VYN*:EIt$#YDy";H#.fPpL J'Z8ޒh JI@LB9C2n '-NkPmS?_S+y~͔Q_.=yvky6#7it$JABT3H6 0s2siA|* nǥBc$A-Q$`0 dTfMk87%I_$~>J(Ry 4Lʻ3P%-ѡťBg]L>ΓL`tj#,$BAy+H ;?ښ6j eT$$*"șB"DE/]A&ɜJ8)DŌV}E r-/-KS tPjLE1|,^« ӤT"x-'~ѥBr1~¯2T?+Ü~]ԛ]{gMYjm^ }oŐ,q)o%C8K&ܰ7FKexq7 )7iٵVDTd.y11z}flz\i'7ѺÇYAi8)s/k)VKw1nV"kvr\g|͛ve-:雰]4HM̭Sni#>è27}fw#Y1S}샷11FY bsJwKNzxKXA~ 5$UHiq]^Qg ]O;=Z桚xӇ˽п$v?w}~w?/o?;qD+0IM5`g9O,_釅2V_n?7ǡ3rqO#W~bifhXhA~|o34|hia ښ 6W9q:4KbBx_b =ҌL g EJx"J-bw,p =N( !`%At*Jk3ib-J)GÎKUpzK!Ŝ@1a! UYԁ{%x! !YZNﴄ-kPׄТzx,O.T%<cr{ކ뛻74_Xo mnWhf8L'&NSh'󖻔. LEf1l2G٣6^6Z>'Fh;+6km0Jv的"T&+MdbBQ*!'1d\{}TmjE4]nEe}GwU[%#5>ٶj)l8͆),Z~}IE܉''ٵgTGY/%ȴr{Mڵ5$g{ȑWm0 ng0aA,ȒG3,_C-v˒'VSd]UUXŜ.ei%MYdV@c;k;RM/˥ȳ%]jE/AANfEI0il4*f,GMY p|@˾" Xi`fZRahIh!,Yub09y 'awbu2d;Θ*j%rRH!: KroBY 5ڐs'iuKI{X-5LVQ[%?x~(ͩHfu -ܻ$IĕfeՉNqZ}M^Q`INfRM!J0!(]먥z۟WWi~ol^o^vHkOHyzfs5+KUUЯf*%}f}CW<02ЂmY+I0r̸J!*HLXz:)ߧE?fCd5qҦhmF`MN%W.q!DAY)ɭĔLGQNxZ)UZ A@fd(E.9h95rwND qeICřd;?R9϶ç;ϟ>"m'Hf&fKɐ ፓz'.J![T>Z|TZe'| Ko>ZJC&cAK.{}17P o|۹cL)nwfߐDLF\X6pܳrilN[0ttFU+*W*2_|."Xt2֋?jgJ/ ((uL1$R9JH23a,Sqʜvr+8Z@̺ `;~@fy^x~(me;'5KqBJEAeRT=kIl%6 ibe==g˼$hiߗ4!Y+mIDh ' LMf2i.·:RC0uz7YeEN;b!X+L݁s;4tuCCcZ#%o*d2=,[sQ( yVk㜅_l HeU^&*!WA(&.9Ku@RȄCiΔd2Ie ALZVϤ (ԉѴ /w-o<(G!˃ɒ{{dz6{O]xۻܺ"r#iQu= u7ZWԺi]Zos8eʠ6f!"nZ;=<{hC}9o[e!8GvwpmŰ5_4ʆnfaO;|/7k˭d+j6嫠h0q5jv;Ʌ䬌BB\W![#ϴʆf}w=Q-[{"aYDClIKe٫{wS"y`9s^FC%٘C!H?L ۑe@qV=$K<&rBtii9I[(Er0 ޲9[\NJѠ) g:Jfj86!!6h6 WJǠ ,*XUZk1f̃$O٪0CBr@aSޚ)5B{K!$y&IOeYi-Sw!H')#) sȕE˼qJK(+CN$#ǐ4NIk,gؤq(}P-ThBc u u!r!rnIq>bse Lu aQh\ Aɘb߸ PHJh'D9ON,br&v2v@26Xn({ Fh mgPܓ;P+n.-TqbZ< eC}ւ 0Y *zS d*@LpFw[~ҙNg-dĝF i-AIkxJAJIHA%cV]t+!з!FxqJ'^M3/8\p$<>{q/bM>{>y.[>/kq ;:_ヷ9bJ5Jȃɂ@ώb,C4EV&c qP FE!ֲ5@6x;/@h,!ޤXjo[eJQE%lb܅%YMJ 8%F̷V.zq3ifvns i{mӗ|Lé;JR(0tY'7,lqs{y_&W&F@EO)*q'[f'!5$sdv eIh\Y:xf D77vjc)[}D=hKm6..oOdKm:.{IrwqnY4wST]ҊNvLDd̷G 
ٺ,;./s[gyE,rmc1ΨMNx=ɌM9R8-3,x0 mօ6IڙeH9޼ҨtW&݉%I o@.w?O=2`= rt^D ME0\uFt̝a* 9$ϷcĻ~,W80fqzyhFγƍB;l]{#_ԵtJlQ[M:-K BjcHG͑iσW Q$ )٤+6=. ?˃HT2d7 %PV@"O 8:r ˬUl=CJ@X"c7zglDVf*Cz6թ}Үj'$N 2F¨Gx!w2$d CBΤCtxP*pu>!(\F @Ƞ,msY@KܔjfWzS6,u]Y,SfiYC>4żBy+87_ k5/}3M]}3}&.3-E>i/p'c|g\_~iJ[jRF+oNI.j DY0҆PY gibwm,9lo4ld XƔn}zˈ?{M}V.o#Hz@+{}pzP+E.~Wkh޴+!+gGqV4S3Gs%g} F e?ni޺7k^NDUwӳXZs_<蟞V^sOEO-w÷nI#7 #q6pT'Ze W&g}㨂mu1ɦQSNʙ,.d }2{+jhvh?v̆N :g{qF'RG~x}Ǐ>|}z_>}x8bMᨉL¯faj j3ʸ^Z)?~0}ߟ ,#rrҧd+V\yzi|vbП⼋eL/BRiX\Ngm8]d'ψt , ø.3ڦcwʡA],l'uA/>`)G;j˺7qUn!<0Kd r? c4$-hT*Wb Fi4ZEw)sU]9>_@+tf6䔎34(/ "HHE0QqRY&c,瑩gmYJb>LQ?{V]lNKU / f ^OK[έoQe]ǖd݉YUX/KaQ6zgڏ܎9>U; a>۔Vba'L_/ʁh. ivҹ?!fIZoB$ \I.d/@9Ek'cI$~7 {\a/~'Ccb&9Ю4rAP48|~Kh 袴QAiLޅ0(,yPHn /SK[mV<ڊ{{uIJ..U;+6vgimjhS#V>0;b^=^u VN\GvRj{\hh~4:< +X>}*.ѩ*SXJ%EsW]iO ]U:wUuTKURQsW]DtB6BJy*JURCsW]$ *;tN]Ui5R"6w ysH<ٚr7-00ynka000)Ѓ4_vriL?Ϩ?hN8bd儹 4 l>7]œAUZ#MW)msӯM#wP|=NZ}0)\}mrG'R@MrLm]]Hl\\v.oZK-$?มˠV׼>__ewC>Þ* բO~/?mt@>kA\yg X%~3aXFW| |u@;Kl/StsyfObpf:WDxGtݗofMs9AsJQ.`ptQ*lwwLF"?NhJͱG4UJhz1JIxFؓqWUө*{Jj<_s8U):yaǯyX|w{+ .vhI'\4Hz3;X#'B+fok'_=t QhW6_+el~lMelW6_+elW6_+elW6֞۳ro*aefjW6_+_;:_@RlW6_ >9W6_+elW6īN)TzUt+d᪴Oo#r´6z=0GYWxU}Mz`m8`[np~yw~'!)KDn+غg¢Js }NdHə"uIҗh}e+GnǙ%^GtB}&d4YeR6{P2)RO% e  R%w G$6bb5WM4jːPc%w}D͸Ҫ.7u]7߻qd `f`nBۛ-%&h~'%Nގc [N[sX]T{<GTܷsj)hdYMEKŮ'Ǻ5aRqA%-ɉ$i/%g}{YW4cW_H=BjI}Ll"Mެۣ8-_< :?^}?x>KXC3!(Gd"A HFd GbºepTb."GL(rX/ aa,Jpa@Ά)eIeU{@k(C*D*-%wҗƚ sV AZg-]!#5NU!Lѹ ho$և-h,YSZI[Ŭ`ImzvazVӸNoVw=Pډgo#Mo~=6_SL&tP$viK@Ї!lqMeyDEQ P )hYD{K>u(^G(.y"ʄ(: dLRedo*Y9)Hol=2y}5<ߏEsm($d.qgJg&TOy%B!%^m啓zAQw`/x,AYS*F#Y"Zlt(I=dHb N,<r7`/z`mDIr4>8MsJ]5116A̖}vldcٔuk{ZF,މR%ؑM94+>aohkxq2D[rLiCQwpΗi'^F4Zomj:=֍ 7'o~mH@H4ɐփIy͟V.vg]w$2[vjє\iN7TјH8-c"{cLҫ"F$]JnHESL L.|T7@E0 !UPQқ8SXoA'qe1F7 y\9O,)fODI -%bsUWP-)АB1H />R@݌㝢B3}qwIKd3x.j(Ax߶@Փ7laleKӺB%,}1ze (\0(*ܚu ~r>t6G1y4DVP ;up Rf.1QɂP;g.%q/ o3OsKoUD~cI_Omm7w 甀^Mhjj9ϰ`SJpE[Is%j%k:y&I iq!Y\[:Rmj[Ƭ sӘCK&PJ MPD+Fv>'QCu$(Q( QХt`oOx@[SS\A=jN}=P;fay>b !&?Rl'0l'3p$Nڹ*g%RJ{\$3f 
HfvzR=1LC4L/Ѕ8NvQ9gIEÍMHa8h"Y(R-1Fc>(E@1"&ڀh91I0SQXV)>>o˷=k>]fz o~z RQT}Vߺc1O,5$]ꎯ?\<57]:2<Gs|C+ڒn~UÝ/^i\7WW+w-gD7jL<7_s9|\tsOL5Mϕ/) k3xF嬥sTT/\:qp^^3 1|(!n$}ˢHoqpRCxfϵ̼|oB qqjs|\aH;"@o>}_X ~VN[k~-~aVѥY^7ڱfIi L-*&dw[G4FWv1r<̬0,K8nZImBsyY5 6>wO[lZ+%3Ni?$E2hB52r|BS{$M>;;F0}U'Mڀu)':k) m"`$tIڦP!$VEIB1"~;i D!QŃ}o ,P\(-iVzRj.SPLA!0T"Kԁf(αMjiVwNDX"`=Q˜o"D+1x/$C=DXѲG}%e2% I~vbvBҬM*vU;SUOokD0dm' OF WRZ!qZȎ!šQF ;: )^}YXc}*'-agky`/G!54V?1&PvAV҈ J>\ d?'2/%ij-vV"vrVBFkp֍:>*l0:G;NJ/}mvl㼬¯x-t)ܺ RY LU+HyA3 _Xm|AE (%(  LĴyE!}R(T)P|bu yq `1}Q@VǃV: Ed06}""Yԏ/XA~( Ӭ(adNI!/+IAIt'C>}0--v?b<IrC*f7ȻK}Y]P>BQE^HP&#)#,C>h6y2Ze U옡yWC3K:vU9Ō: ڮ-JmF΃r:ՉM T=viEPAzE!ܠ0̹[uCϢb-CA2֥gZI; 7]3,f5*)8\2*FjWr-W뫷 ^"l{ otf&= pHd_m#5J X}1sɺ6Z4;kBٸ)_罚5l<553 v @][Lq [-d0L&Sjs`*@}ۿ,E0USi Iwy.Urz0 3|=n0J$צHr1##J醇lFmF(B0wzB #RT"( wB)ЃV#6dXWolWW/6[Ȋ"D W}nB\ \1};?& Z `S!7BԴ(!RXFUTk h5uC oeP?yڰ(RVXGϊ90`ci Gj F5h*F (I<(0SA"PՔqOp2gBA6yQ!D śZAK9qP8kL.3/,~P˙-+Zaċ6 у bL Ǎpxo,ɌS:([{ z %)XI%nb%L~avNAu6K".ń@h d$>D~ׅK0q9X DB>,5P] Az3!zLeˣ^ v9Y$GSS!4 ~EU[D*4 i-YE.ڂĤF&MR-ݯNwg+'M_d}hm;:T{xq=*V) UxU:]6UOƗmfքsrը>rG:M# ҨES#ߪ֍7kd92h,*$J(/B6T4sVȵ3P)V bkZG y^6 }|`G}J$h)aD&4M$h"A HD&4M$h"A HD&4M$h"A HD&4M$h"A HD&4M$h"A HD&4M$h"A]4D.#"AC\o-IАJ"AErP$VsvbʙgП}M%5C|Mkyl"·i,kCYӯ%J"lexU }3;ƩYMbY@77G, q?kv޻Pf-0ߠZOwY{4}>㸛 R~RiW_Cy?8 f˗ݐ A{Or@W÷ 8,ݯ@`r֙FL5~m|yg,re2wX&&̚,ǮN^ٰ(Ql6&ԣ^CJQ!uh< u!@vWmz9%E3_˜!y[[vGFRuIIyncZDf2@me~ŽGe8}1l27Ǽܓjyibcr &&'M_홠{}➋aL~.Xjjf%jx<*/o]Ѻ3,T U! 
̭bh;t?,ʟ E@'썾\G:^ƽ¼Q7\ͪ"+t',J8 i9pQiZQ->Z S wE"8;Zde~}̾no^.Ɠt| 85͸y᱋K%o߄c}GaK{]U7豞[omq+S7tCym5ݠtnP jAM75ݠtnP jAM75ݠtnP jAM75ݠtnP jAM75ݠtnP jAM75ݠtnP jvntU/B:OV$fɭs(Is`r٬8wXӥ@[9V_P]7Zמ\]K,O's "\I.e5EfU,Z6(m*K8gLz-dr.\Ztzo!En 17C7wrg˪)(1?&RfC\p4%8LԹu".ҭH:"wG㮪hU֋CwWRsF +QUˣqW2,J+١*rޠR wL Kym&OƳߍ..xIFYSiL>d'܋QmEȦdT%LbbYO1zm=CjZD:_=k VS or= vG/6b/ϡ'?,jЍY3W2{s ~5?߽!>~>Ct-1DN?O-_;hw~ճ[磷m*W#obNFt jFcmwe(/uZ8+")e2N嬑$S dx涶s;sրyQԾ\!^qjOnh; ;OOA/#q?0Y46d\OXkb(ç *Hf @\ZOjWC$M+ 0dV&9'|-K 9 S<`zkF_]Ku8cKe/cD}e7wS 8etK4whu^.Ɠ2gMEci߮_er}~s5>+[6s豌9\_d4Ǹ皸~4kLrۿoޥ_/_'/߂]U?v[zvsA 7o/a< {|h<8=(JٛfcA W«z;͚'J/N<#|R*!hU k˹)bAA:{*!C'gl֮o"?zymuOBEєMp;RR9}dmm򵕑F]KVZuP=D۲`܏o^g>SЊ_^}h|-:p/]IϲܸbF]_lNmkL%1#,S>eKZa'Υ5Z uͽvgI{8+ HKuEVɌ; x,8HM9VfĦ()56`Y.V*ޱ;CB(1ⵊ]J!b䥉u>ZDž\:klL>I:U(͌JPk ->d'=?;cs]IfDq(!2x(*:b8vA=K>g*=!GHMw۾C92YlG%6 3CH6*ga@\TK'b,aptQyW}B?KF ^ F EX=f2ۧG&AQ3D`JT Cj8u4 I}a#CSQB9aG P\Y ?>*g/ſFr r[O] $yԁh65+p50qAi\KP>/oS88+/t=tFT_ۇ:߿_yGoʊ1ig(4 \wk+@n4vrGԨy.]UWɺbv- K`.ܷ8gvukNx2jVw-w!M+qٸWhZ5/3qudDxR'8A??}?~:̜駏߃, j'l8X::oɌ৫oߏ2L;XHKփp*)SOy,w/'?PZ[,͍ayv].6ܰoX>EL#AuA M>&!pb}`D ƍ55rQ3( tA{u$+j+co$=a]|$r>3smcsóc@EF3d\頄W"jf`c36 rHmZحs?q\3.µ[Îrm2=56XUJWP06!i`1XbYO|2 8%XOc*J $6+M9X ,#!I%E5O+;}mXMOi=OJ;h Q,[4T.gbQ;ewe+1J*6i>X饂Sz!VL/;q(}w6|z/;egAzo!nȐO?~-X Zd0DB$ ϙ+%Lo؇X^wV:'喝V9:f $41 u Hm}P-2W!FS## ,5gR A:2T{*E)|8~nU{tϬ(Q-E.\\5D xnd_wq)ySG re;PaYjc\D"S4Ko;[]*!`}==3xw7][6跽;1\fEQZX6uH>qA rg:kЩ:JGDE.D "3~qu;wuZ!֤eNj QG!{EL@9Ny4$hc=1tkD,n|f?ب؎aoG:6ҕ>un ߫z/ xZW⸜N#,b TrTaM*9K , ޑ^;:ڑȄX=4טzȘ( #\Q2QRfq s.]֥H^gmmTe20(,l y/X;l ݁C ðFzed:;+(ZzT>zTL)-w%*oy8m(|^Fm ].YW/SQ/EK &L2NIX$(t1%TR*%Z8P?p^$)2v˽GzapTKeg<<{y39Q =SRxKX;mWWKq/<0oȨ5.`t]Gos\3r=~iVF.vvޙy5f^ɖoW{wfq<8t1 ?]o)DCw-i3 1iO}zE*˭_6W.n\E=q{(hT>kWpj?"+_ {o^XM{L-qNy8?Tہd$l(U.*- KC$ǔ!8NBIt)2ڲ8;T\NFpnү.i ^*PX$ y6H3RU1$:UUZymtTR:dQ̞*[4N)bM>RZ;?OjACHB"Nj5\7Lc]8-Kev='{7i9aA72:eBPڂ1t:!h,ΒAk iC46M~^9awߎ-6h[^qA;܁%š;C,k2v vfŷhY@6s *fWBQ<*&"-?u-`Ci GcM:OZ0-xk"&DFE+=11Ik@Wqm4v.oMިpW iZVÃ5ZŲIolAR(p!AZ1ނED\)Dw9>A?Z}ur3&}22rT*KHT ƹLlkxo}_]"~>^ū l1nTiڕ8P)|3ӽv ]$5^MO7|ݠ 
6gC[ZwИt `y.aA9ҿ{@=ary˲)_zxf6i囂w^m ڶ<kȖm<itw xQ'\K\ t+Hp{:bﶒ- n{WT)9{.ï5m=T Y>7NEv(V ݈R9%YP<LS<wUք5цvk_վ; o<-hQe?mq҂o e%|J4˂1p牔)e^QE9Y/("8)l}Gv D 4|bBa#{,q#l 7NermGeG6uWu-KC^mڛC!F~q΁G՚2-ㅗj>ybtCsD|FWw>9D}(rl*yo<:L3s;XQBeq}ԙLZm<8K5ʟbdVo<Vc ,Ubfu.^Mqf/Xm4SB\bF00DTiDjG3,YnCKe‗XJą3' ֌ɠXs ms?oԳ^cTf`髯:fb^.ٮ-mٟeI-+˲ 87»MZi5;^yy,-ҏݪ Ê7n珪Oԃ7T ,5|fv5sU_ ޮ:j r4l\OKPA |y:5|O{ /g(i!Fcyނ+jD`B8 <+H@/zדz}1 =1䝵4jK"~YIe\vE) 1c1HD{Q@SFEǭ48ͅ:TU>w㖶n|p:?op sm-(輜z0kXjBXoAK1!9`-]!îZ(B(9dWi+ru0 UPBk;]qf=zJp*= v1\E++R-+I&+? r2O/d7pG2  dpcxE\8l~Rc}$3 EpG hJ3ڷhXrbl?{WH0/n< @?L02ݍ(%nˮ*`|谜,l+"Xv6{65#= r{ 2p2;P0'F=zug/gYz<ř}4cҞqp w74wx3bvW[ӵpxnlߤ6ab鯯V.v0>q38DMx7X|L;e* ˋQsƅKT_v\,<Liq&7=}~x00t=z_ۜBh!j*N -i98[RkM/g42E-r@SW̳wF/HvwBj) A,HglHBi D=SQ1HYNU~t7|e aj߹{\pruysܺgw^ԠhRR>t(F/AxlqȉADltp (#T!bQ1Ȩ 2F;%(swP%I>IG69!1 g֬lulˇt\uv1w|_ݘ˯Gt5*; ty5YvG_(.ije_O*S2L*Ϛmf LX9Lؿt#4Uh/·Ͻox>VY/03OjNh FK%,vh">uIޥT ѨQ*m 6NDRuK^YȤGbF/K"ChlImAtV-jO᭕g~jd_iz!cs|MUcX;5k|UnLأ%)RVG~PȠC!$)5Z 6Cn|AFדz@߶U0b[!x@tJ[&?=Җ,vI_>| TqgL҃ijb䏭7?/0{n= 7uzm;s e4<0[ovH4B||o1#BjI-yUͨ͠X2Li|':u?n=msx6Z78+[Ub}AjuXp;I=͠9Ԇ _/[' ˠcy$RgTj&~V6O_Y_~/?K޿w{qGt ݅n߁G&A BSo|?NRMu3OӞ ԣ[y:=zHz_3-<Ӭ"Wvw?m"rzyMOTw5lӮr+v5._:˭qz3Yȑ=j} Wل[J/GwӲʗgq%)kTEdSkz``5(h: >b}1(S^Izmk\4-qOr(" W3ؚPJ /IQed8WBv`N):Thbw꟟qDyҽAWץGpU￴%G1.O1W&@i|pB-pPK{?'ThI %ڐ} 3J۫|/<:^-s%#^v`<؞A瓖cp㉚ܫEZ׮OG@dMQN'9 cdJ ^-FeZ;OӆEw0'.7qRj\>gC02l8(A9* JQA+SW g=Gi)OdLAkoPic(RȲlfQ:lO{YpyԽPܽ:ܽx1]{-bW7 q?-/LWZY݆"u>=$T*ovF8"^$Tq$He@2?  
y?T2ZGQ#ZrRTca.>(R e+R̟kYw6K(e/3UڎeeEخY]@ /I<>M?C%6&P"tp?3RȊ,S:< U Y ҪMm1Y2*'/ <%Q ]l|CڝQǮR;Kv!RtF+ jfBDT"\L*~lTo$Xc";)E`E hEYKP2R19 KV%U [q|sZi=$i{XxȞp3VDmY~@;=l+Y}+[ikvV?H?]@WaaWBi"ĤG{%J,8Yp9$ F)%A)C1!*ed5h !,e4d=z1=wI;7`Ο`.8mtnz o_uwk66E@h}յ֏nrT:nt;Z^_=OxE qlrɆK (Wnz|[xs峛[ԼTr==\ov}(1oC[xaY*΁ΦM,F&4ß6΃?ܖy0 qRNߎ!g |jnn')$ h*<^ԁ1T 1ZmHĂюa0PISCKhܡh;H2*{") .{uBLS1I'aSTN@$jNh}/Sd"焑X3+۝ug GY0fB:{6.~B0)Ke"T#IST1)嚼<(S^K> =P SrX[K(uE~8X& qAc;#l" +!QEx ?pu!(b `krg&1XjKɢgp1(KFm23pIfKS׾asߖQA[pzs~́ l,˔vJW͗U@-/Z +ߥQPrR{|d%ŷntn 6سN=HkOHK /΢?tEŀf8a]t B!El (%,Tn bx-gNq%(,Nc2D(ڦp*&2F+k E0U l6aLl-X{^w>{my(Fab2*SŠ˹ EkF{yg+IJ6Mk]hKr/ ^+I{Gy͞P%9br>f3 CDg$R@5ik(g1Z""t6ReD1>ኵ6!i(bLI}i  `IrιTTZtm;́,;"} KsISd_Q⊺Xd=EG11p +o&=/wWmVŘmYdb} (H[YV,yg-jI[R{i6]uX棔*!^۾<',9u,B⃋㧿]8N5_?W/+d/\3IؠCĎ{dɷ 砍ZUD@'@X+`K`e E8?M&aVJa_ho_ |=*?%ˏ0SY4GH[f 'inGD XߴmAAEo*` D2}&|myϮ?Ux?ux?Yxh=Cd68Z ePAF x5%ɜpK)͇ptDK877+?o6^I5m7G7O=mm)V2G|U+kD"(ЂIϑdǷdVT| qT˂8}Xf>,D"> m 7`{g$/*L($Jؒ&e]Y@d.D*5CF̷QdiЕ3a Zl=iu8 J3j*=~m]u0+kY7Ή7EY͠ AyߦU,Kw !Zbw-R Ns)p9=27 fsͲڀՃĠؚZcb./SH﫽P;EԛTj۴yvpYmAmX F uA~*WÊ68QEISn+{c@DF>VwTr|ٺ¥G~ҥLjm֙u#.L0A{`ݣSgocJ.Ĥ.[[,Ͳ.UZQem}ʾVSW-sZށݰip<8i7 pyCpCJ<9/sILkc sgtw'>%)1L|=L޸m)uxmJ[3Ud!Զ:"mZkB z݇ڟ) /Gp ֨?6\!ލ۠m$(͜E] NԌ+O|;|/;Iv"*4$f7)J9hVfVR=y#θ܂d!e*=Z뙇.ewz!,[ŋcj_M3ۈCvQ[+F5G-#FFn>]vؑ 6ܠbWy,KϜu˲x`KQ,9U dYr]l(ôʆfxvS]O;cw+bw8f'9Yᰲ}8 seze޻$L('1D.d#("ҖUR%m׷(Jr}.UcІ@f VFaULAZk"f̃ȍl+Znp)*ゔ:LB{K!<Qֆ5 pmpl\]$2dVS@.$@Z D6+JT%. :LL u!r!rn [v]XH\y EBNukj.q;1>9(ODhMXO#A9O-cr&ƑKS,A]e~9>;mt6 ݁grY48JJ*Td|DB(}wU =Շ/s}<2y *A{4̡Ĩbvb\¶SA쾃Ұ,GΕh]QeZ7(۞ii)9zR?{h#7k%iݮ3R dp&,/y=zٲ*!bO]BĞH6'`MF`h| OYk lvDQ,c=b"Y&}V{C˳6OsjƷjd5 ! 
ፓz'1+0dGKGeMвo;YO3w6B\mrןz1]}al෽}s.+>PM2`c?LM2\?JM2Zo^ƨY_3& P1ӻQZ:bHq+GU娴n/s"sxʕWA_eEJ\6^ ևj>$.E#8fPRyDΙL,)ϸ{ge~뫝z5WjI'(7Te ,׫׻VO۪)-3YF;[z8/Dh*8T&7\VkIl%) 1E{㎐0ww>F7{|M>zy&8|#|ȈX 0W,6?m%;xTDa8L+ѼJ{sd U _L߃,~»z[S';ne9Zϐ;62By#F0&d YJD{yB]:q`Gu=>Ӌp/2~oV=i=d#9_+glX osL*JQWֈăhA]ysssV+Qz]ĵ6s~s?qpQ^xor܈|U:}e=y}Zh"sGaEe=nzt7G˻UhQ;#*y Nͣ)p9=27 fsͲڀՃĠZcb./SH﫽P;EԛTj۴yvpY [Ҷmqu]Ѕ:ܵ_庴ʭNj]cs[P=V4#X'uaG~ҥLjm֙u#_F7L^{yꨔBLCE=q#.,;Re];סk5H`y>7;P5wޖ'-F?.Jh8q`zH'Et1 i`,pT9d S"#ޗ8@ɚO;MI/O"/2kMy5wF8 ̓g6AنZM_:}pR(CPmx{֨?6\>G}omdJ%`+өїcS|Iﻗ!Hv(CD ThI.oR.T\^@Ȁ3# Y)spmz9KYAKdVo"˜3c LM׫}v=V[ "2I8%銠jY^'. 鸓u@&kM5h-C \:C!(\NBG#ƚ#pv#x#p\-k_}~mi65vEoYɲV׺e_*h/ޝiOTB_t69If;xVVց;E#r8}.]7)~&4ff-jd'$zWj,6yD Ҳ#,S}`eŘ5dmĻUو Qj'S Vq8%2guAJ &%=C2}nm8{Orޡ\Dną vy"ؔqyy4o%eeuky#<}rU<&CHm9!CR6(e2rsiVBQU=LKui"3> kVuªhu\+Mˀܖ? $AUfj'V<:Ӣ.sUsu=HP*I$q AȨ\ ZYc%z?ODZS}3#*DMyIQI44hB)$ R@:$[ɼN^qZ5 1&<62H l>XN 9FhgRE ~-^" _ORvT|WF)9|ョ]*W_?HokYD/)ײ/~#\x$S\5-%wdq]^[#%tF-i3,ʑ̥!2$FOȬK+-Gau_Z$(ge_;@3K7/&0_D-ouUj0 V?&lЮl^]ƽZ-~4s) ~zRgUM(ŋg[_,#:w/ٻ6$Uwݏn"'g`7ANb鐔eQ05-tW?WUgSbP b.ܿ87[iovH ?\M;. 4"3.M87,-M e'8]4Za !t:Nckb:wSgܣiqң^#_z%LߺR R܀CD@3`ɸkP`TBɻُJŅJŅNŅKŅBefptkU0^ Qtb 2|fQwNy`ࣱ&Ft>!+xkf& dU#o'5F!j'HjQ*.lB}^fЗ+o\2?Q^c)ԭw,2oRH( Sb;;֬WA}u2u.E^HOs7W.lOCZ"8G8Q+;,p`tN1zP0Ӄ~hKz"{CgEe:t]-PVZ6̤mN{?N_ i姳7?/5Po&Wx*|OُqyLbGeoӮj F R?/1 ?O`o\ph^sS]) Tч~:*$*,VXci{VSNNsk یg߻hEs ~'1rTs{S?uK|>,s6~z; i X>w5Ի~ +h$xLEwkZѷM~:7 ׃BC':%\t^LvLؼN7񺿯TdWNoiJ; 4@R!U-e=7UPS\ ! 
\T\HPӞBD񖇈bm!x+BDi#yMBWHhKui *EXu7dmvFnL0ϩUdu^H3ځV C&]-Rv6i?&OgŔ\JZea۔.} {h1s#w;+U^+G4ߡP*Dx(5iyN(64AHQ4&-/],?zγ)TAau@:x&6$/D ISמ$vk,GmiTU7*р?ܴ2Bcd״!!4hm),Z!iMNj'YҿܘNi2/ >}ŖIYu sZdV2o& 5 ")J9dIyH:i2t!&+ێTqZi紣=0 Ϸ$]"j, ?kwwl?QH4d\I 򈮾\}'rTrPkuۯsL3N@Lk{DF~<: *]\*U9+$ F"r-;BZzt%Az 4<8hDp4⪐+XUVBrdN\=q4͏H\p)qUݷz՜*Ԣl*TZމg(4rmM\+ΖcEγ鄉@SJ2F~h84}oxUSLF 3˥wFc4l .Ԫ+JPsӈ\3uDlG &r ,JvqUg)VJ,n$eYN맿}.E;>Qj<'%?W'QuIT^|2DQ9KYre*%Tfٯ#@3`[f6~X ”?TOHƜn7_zMe˕1vm"x <1aqg`2i,Y9/RZE%%#yH*rU˽~br>+ qek~~ݮ, xnz o_uso\KڸaUZߛ "`YtEմi8 .ʬV~hRɊJl_›+÷y07>gmY[EoÞ7T]xHVaiλ?/[jLu!N?O Ԍ5UPhQ5Hɐ\PJV+\U>!dj}-kfҴKSeڢHj3SjNvhsڳ8gƙh5IfLa޻}~t*ہepJ^vh%ZY)4$-@+r) ް9[D_K&EYQit# %zk~IQ_onb .UoCrW -xA LN))Ē b:ײQ^aPK Y 1R%P 6g JclitN+t*{a0:[Ym@#(J#/G\jhIDg ؤƗV(="tB#7+(CbNHl썠M)p.[*rtު .*wzaTAh{<&X&J%C2UZMZK+1c Ӂʀ,gBm<p *ac-Q %|KkܷPBWVb[c ́Od8o8x4>Xw)hI{ KfVI)}ꊔ\ f1xunm!16٦\Hv4Ѧ(_ǐA;qO:1>lb,}BWFL1@FG<ǘvUtdsޟܟߥKaxՏ= ߥAoBʖyM;BLu ݯik)HWMtn~ +r\>/[l.[¢ogbҞ~ߦn=NWnz"97ugR^>87JY$]MK*͙@+Y;Kr]zԡ{ּ_pCc1hd,9De wd,rc/!KbLE1#,(BPRhˌ R Rd܅5FfcSjԪW >V [ ?jdRscuܦJ&-FE-FM*FrFd& ϨD-xYTxDl$9!C_^|q\>4?PPC#*J#+pWyQL^vjcP{6{6{6-Ȭ6.;R Qq1 Rd Oy汑0Pl޵#Eȗm|? C.,063Xఁ,y-9+Z?Z%jn5dUUՖZ8zc=;p{pFrޞ^SysL&ʏ8:\n98ѣWaHŴŕWR-#^V=2Ҏ2.~?{>XL/}4nP FI!ֲuc+~W٣K P7}8 袑F^ϼ ,K*LjLgFCXYL]/"&ZH0<GU~vdz^ 2kmC 3꼏Y;{nu`KLӆiFbz~c|Nʛ ̒.+rS䦓rgMZ<i;gcu-xb=n;E ie9\\w'$f ϺHh\XPPjz2Q7]r,;i)]GԳfs߮'pVLas"39t\ǯM庤lM]O/w|Z#7W-x4u #Y>CkJ:s%=t7=f6V oe6hA6oLY))4u80wVMټTZZ9b}mk}-&Q^xcEKa›i$+#%-m8(MR Qc &V`fY1h˭X n(1%L@[S_7GO_(y>r6{h6u^WuO!?N.JR=ʸKQ8lzS:9>[9yŢBPQGt$߿{ܹ{ڕґ[Ғ0g^E}TjU6WD1(P51{d0ai(6DlTDYm۽g'v,naʌ][<06+ݯ|en'M&^MvX`$8L p҂Ĩ\;d UXQ& nX*oUyb@h@QAsglrD`p'ZC&w\䬗'@oFIl+5wkt#[|ڡ\W@Bn |X璫 Lj5:sԝ nklr0'֗ljr.`9VMn1;kHEҒ)E 0v؏=0ף>XNj#~`|^JG2 lYJx PkJ?ܝsLe/]; bT9cTP9qNR=c_AD "2ADQ"Ji,*H8R' >,u`r1 WolfDW +7bhc*h%{gqoR0#4'(xAcD6r6A:rxzfB-~աCMrWx@՟2r2@}fj.1^=D":$ ?VM]V!v Phe221YdѺ?NQenKhEji{1+x<=sdŭ0KhKrk҃Nԥ'쪒t%9 W^׵88HၱXiT TV0bRw9(gܫm)C}O5Lz.'U( j)K "gc{4CQg#Y)<09?PHf}^ΟLӃ/\uy?,O+*v/}{H4kb:LW}:S?]͠1Wc VA*& ؅JvK,UxNץI"SeMiAua6K~0L~ۥ#L?19X x#QHY5Xa FVc&ye4zl50gD(~J~#|( B, %/R0Ţ! 
RS,h%<"%`9N`HJ]gFGdXXmZv\pqS{T$뻮lgm˃AؖprsW[b2}ӷ#䤆k=ʵ7եK3n)Dxi |2HkE",2 Kf ]n͘qr]3Յ".E66ԗެ'1Ʈ"}zTf0}_56NS1jHT1 h*m8h! G*1dT`gR%k %ImR!`V0/2.Dʬ^c/9]ٸc[-3kmYv`Wq<̑H˵VHez"/-6\F6xE6>\H☁ MCg mA9AIt#QirTFz}X;I1vE#fm5ʬUшE#ܬ$ j-Q۠Tvw`0y")5*C1yw; 7QHb3[XZ!0CFbbɝA0 W:qɶzQg֋Ţ;WG܃P̫޴ލ'ֺ!Qu>su}Zfn}WI;PzT#*73,\"c)7e\rsQDGJj.D%E]BuExd]%]o6r5&Ǣ|u BcRW`ΎF]%r;uJyg0QIJרNhGxVѨD.ǢBt]]%*U^X*Z,\|ݏ65\5gK:0uN!~D݌&G=&0|hUgQ|`<.?n'f }ǐ4cXdgǿ C+RӍ\٩_BPǣAIO8Ze,:uuῈgt㣎饘PtlCkX^*N(5 3j]j!㼥WoHH*`+搮,beP;Ͳe|;kaVEҒ)E 0wQ^x,`߽&nd)S8v/{olJ`RGTQJmŌ6>Vimw1=J8'bh suY7,;ҫ5Z6 m-ö` ۂa[0l m-ö: ۂa[0l mхö` ۂa[0l[0l m-E# ۂa[0l mыö` ۂa[0lam使\FL#^q( ,K]ITrr5I@K're4 h07UӖՔwN 7)~XW6JSQ`o8)Kg u9aZQ}{LYg|*%-[e[:"wem$Im%y^O/=`1 #3"S☢Ԣn7}#yX$E%J.-V20&QJᣧA g]Xw&aI)fP#:?(N%!.h IB9{/@"o'K cAIɎ폝MgY^]uE>l)aюu_dR0+&0x.>DQr}w^JL`YEW$\0iT`NJ.#"ST2)"D@'Vw+2W9Ki|@"3q 8 \9w6CSS(JpJRƖщ`eMs:503'z`Ũ) 6y mCܚ1b=YDp>%8yŇG"e"TX)V:YWShj9 vyLEd2Uh=uFf%c:ŸȈ%Q 6I \yx:c9RypWsLɽ{0Z*6U=HA=9"-^n]n|W 0툦2>貞4<7ܮ@yw26oT wm~n9O?@g#`b#̮Jt ":>| k`ya9z{{LM-+Vl$ #hrsSo^y%}{cʏ Y[%l3j@’&cbY@2t)&otB-{f$|ª{9Lvcϓ [Y7zmnz{S6'RyFI%uLR6Gi4Ӓw|XNiC\!q\P{ \L=z@P`tZY[Iu欔V-BHEVqnKk:L$W\H[L*cQ)9uB́Mg9%tsOBap9}Դݗ6}9;6fu)[Q( y>=|%~uKkNl1Mo|I& IGdG/KsRf|H@LJYc,hLdI&&Lm`ntt1ˣh%#_޴ 얮]wWgn6y\ 6w]{''< bV nf7w9ltf2ͻ9[6rQvn7mo|^SkWZn -n߭yz{9|Ka<+ސ@$MwMl~ϯӤ!jnnޥn.wF.OW7E&MƌpށN`=Q&1oDɢH&szK/_/$ Hz],XK :$BfDF@V>N|1BY7uIL2v@0epr΋p x=}᠏W/ ž4 ~lxbOVU[>ط}s2ߦ|- fl; I6gU(r:ځ|}C( &ǂj6Sntƒu:kHA6B '*ȳPA['ө J5ڣQVEKME ([ VErJZzб tv+_ T0m)v}.W|=IӞRQ__nҙd2l@(o 3NPSRk2>hAKHJs2>&`VH[a$Q($F4$YzAB0.i(PEm,-xRs/d}"sQ zm2ejY#Qg*JaAF'+uP 3Q Mk0JCΥF\^ߜS>ZEO/yޛ)@":a! !Q`E sV`e J ,x҆@SzQB^[#A+%ǛzL-ݕ 1\гZ/,e Y ŧj-\AGSf-ZxvWp`hۤiO#ԅ >:Zji꣨IlϬ|"m)W}C1Z=PEJ3.Y:7ۭܺ_V7<]~cQs~`e,)"fbM` ET1(׉ Q,61H(,C"˒ !ʘ>c()T<*_r)huY(btsZ,-.goJ*9)f^|%h"k :_Yg`U`eShL6DW"Z |v=pZ1kY{Z?C:cVfox?MؙVe/8C|RY10bGkKEUIG, 8nmNaj cʘgt,!cVl)+Ě,sgQ^Uaw%.dȺ%Mƶ2ǻ_?Ćrxˇ˝rJ8l1eVq9Fr8!%,vVh! BAI-@.0CaGlVsq^98/M4,5?Gf|==ń{yV-Q#^Yw!(Dm@j̜Og)Yo9bTGH^d'W!1RG.E0D&WZ0AYˆ (kAF) J)2FII3ROxtZH \ < 1U,!BZTO!Q2ׇ(lY5QU*bg%Q~S/tiIbƮC6{Z0? 
dG2vSMSe{,Hh mU{c`<9",{%t<ν!vo?).,qFk6LKSDBI7ds!>ꓖtx_P>Kd罔I$DBFY4fgJYIS2xKP:*!sI7KZH_nwiD6ڍׇ퀴N {EIBW+y-Ih{:J0ih91|ҿζs'L;+)jL@ %hU 6"{ɔq)?wFW |Ng /Yi#Р=R)zsJ&xH)Z3y4: IyklSzKiէn\}kѦ% +A&S(fkLXE9%I"(IINddA*4Q4B]]z`P E1, #Bu2¨!!\RW?`iHQ**Em)|aH8h5c{֊HMɧ=Ljw-ISǴfX|ԻyG&3xF+uPM"HD%d4זQi `K|Ӎ\ߜӠzZUϠyޚ)@br`K HiQ!,Y@Y X ,4@n4O1]uv%*HD)'^-AQ%re`32ÅZF w&=1τuZ PЮA= u.p,dGx %^xa|ab@F \LBuHd"FB'D̠Ɵ\_9mz>C:צx2b>08PfjF ]SvYW?~{լv\bXU(Oc,L?U/Ok~‘h̻?☸̃&2xtU4:_tOY?a1iʊYyNn^<9YNo&WW|=^ΰOjɧȋfՋ)y:}[w.87\Wu<<!psa3G5x~9[==%iu|[>1ƀ}GSU.۫O&?ldw"kI,-cίƄ,:r\Mh?ik_KcE?UOFu&,^}ٻ6w_D<7@{+yypk3)b:Z|Sץ.Yp3LJ^_< igngLG_ W-}^́V[B2;?4swGl8.ŷ6x[/yH˨ ګLVqOx= b:_&F_{XvӭIeAta4Zi39]Ml+G0*Kdq dL"*cIT?b&26:֖NӞyY@?MZ|`,*y((ZG3C4h 0j{RfR%Ewҫ+0ffnykO5kw66bozكX߯fb@9Գc\{]K~#J5C{Ґ4IXLDgp6~xwV#!Ϝ#hL}xȍuEL: q4NŻuw^&C6zMgcMMʩH%Fb$KHE-s1D #J 8UX!lfDM9_Zc}r;%YyiuBHu,U J{5 bIZrABaY)mQƬM9@>YbItN)HVǚ u9[Yo@kҖ3yuP:z[9żt;剘 RʷpXi.#/g9J&5ӯtQl:""f813rB7vG*6 4[4$!-R P%~'E  )n1.LJ:(R(4dyRiIb67W-_^:h2%Z=Ⱦ"+$Cd%g6q8*rVpa~G\f| Dղ G+":B4m% +Rs9*9&(uc#F 'Ϫy/z{uOɜdjHbXbLhٚĈRf~F$i$M+.b-VD(>W9.]ΆB=*n7}SX:`gXpgq }N} N|B3/EVI]V5$6D xUeQMǔ}XڑjWZaP. -"IGZIOVz"I'(j8-4xɘT5qlFD& WA5L)fKjg 䠸M^5<=l5 -0w*\^QmUp)!^K:Z^nʡ;,5Bqh(~0“d[ 1cp(hvG 燚w6|aCMu} {9kq s!sZ<:zr@R E)*hAMp`b0$ۊ]f pu?] eLG/nG޵XFRweGKNse č(|FcOسYLG+OTZ/}? cCyf0m~'ym~l6:|؊+qo=~F%yQZ <=H*~qĕ},Kmpj{-RRi]ko\7+N,"CL3.& `Fί~H-[WjWC)K]R$v4izFPMMaӜKrf)i[AߪBZ# YHktd!űΛ]I9_Ֆ,lSv/zd|2R!.)J1)C!ﳷ'Bۅl| QƖRȶ]ZLym48wXu:qF[l;;8^J\-ǰ͏^q?gtS1/*෶|>Ʊi; ]1[Dz:KF+8ZwJ8՗#mrN*mRhe;"^6Pr*U۫6c{WyRWjJ\-֕@U]J3#IbZ>f?]E3mo{nW}o-x50-W1#k$r}ORTYjMbQMlU*g-h|ŁZ N5M-b,^ή'$SFHzG=@~VZ?wsA: nSzgTB4l|(5eD*ٳw=wxG&6y f-qkOb͌붙6*s2.S, Q?GJTpIH٥kxU&uq!ۗH &Uh^ǔs ŀfH˔zQXZ _}I[{ L7=ЗfR{6m8߻ֳɿ.Y __\J_^l`?F7;?NÂ|_'u~"D;A{Gt_F'iw# 2J"αy+2y'փq xStw"Nt)QJYJ| pG^~xMzxcXjDЯթqٴ}|<-Qim}?tx'*4ϻvqۋ#NC'o.eTfy=ҧՓ}i /gKTo-x*>#+s?ckخ,L8?=w1Vj%c[b{}KCfD 6j,?iK=<FbG}zM0تV7PC}Ze_O*~]Ho|rtWQtxA;49/_X/u_w8x:zw߿7O߿ՏoH7/͏GYq! wp19{gCk=,η/% Ծ8? 
`ht_10,#<#d޲iMܥ];f˧Y|Pf~GK/zUuW[ΑݘyTf|vA PuFIDTJ!KbvJJЬPdlH_zaI.ә k~fJ,q?Romb YզՊ9"wVJ3SƹsZΏ嘔lbݎIyur5j<na`:ޔcX:dǰQA\7gpYM:Z=(fr0]Rk6:\1 OmNW@i3@Jl `oUZb0zJ8Ȉzuu/>utu/A٧~JAWN=DW؍7M+DNWv[zt7tH7:\BWZR~骣4gHWF!At-)cUЕu*e骣4vKWϐX;MJ؆͡+ul ]ukW[zteu0,DWDCW.UGͺPZ]jtJm]u.l ]u'%-]=Klß0PvXm?m/1Գ|:-}8='=pJo7=n]&b{Ii%&Ҕi,.MCJ`OAp٤E K7.c ǝZ^ŏ:ӽR?,rtEJj%nBFqrRTI+b@y`$ :Q%$! !lE!iĔfpa!nl 6YmIK~ip3.R*00pc:6HY%ϽLJփhinLRkS`L*F7JF-`bIn-֛ א!$Ր-fm.I'*"%\tɚ%ep8VZIR`KA(™1MkGfhf n$(:&ΜUrR&3ZLE`&$;_+_[7 V"G@eh}O׳.+/>fe~ :$XyK)qȮ@)Ybu.ܝ7!PLjjt9& "H ^5@r-PZQbQ[Mɇ9m\ DHqu~1ZMĄڀj vLc,Qb69ϪH1US*AfL c&I`E2VA`FYR4BFtTX )xR]J0I( BH f)˒[^/ml+**T(:ø3uXqx9(#ȩ [.҇yLGMrcaLp9 #$l ղV0칺 ^6gFV 2;VYAPfa2T?>TudK Ԃ^[{ׁ16x XLjX*1+VK8l0`&T2y\ yOȠ8P_Ej&AijTS D%!Y=Ec<ԭ}x3$nCdp6RM`ɨJrB#kUgkשׁ<&KΌf(N6}Oz 3󈓹hŧL,Kе2^R֣.Ib vdb<IrC*%98]%@_oݒi!D)Hʠvo КF6A 1"edrTk@ȃDH!mvrm>Q53ZM4-IKl QH:EȒL\5v#`m4LgT)IeJδ$IO{n .Ƙ2ӆ1EjYlFVY")Q%Mʌ7+` ~B@-#6A(PDBIAåcAy0kf8p8dKFN9iN;eמL%qDjQp ̂+7 RT[o${ނ~N*!"Fʍ!SeiS~W;+J_gWmz4UE8Y9x0ԥ.,w7;C% ۵@SmG<Y+Z@; KVvb4@A0Fe z~jH f옍Ԕ=n0)ACx +`*CetGXA 34EkKњ:)a .I+9;]I'`O]s]k>vwU(_R\w_?t)epP cv8imsӶ=Qݘe" +?_`6F1\;"z@PgL-P^VYGDI3D?̽i> /xG^E$G~ie[ ^{UXY-EaN&[j=bFJnFE8e*A$誩5ZWn !TUb<*z~ӭJ-q"fpWy2׫@=m}7hes?-r 4ƣS_׿0gRq1~rmK{tġ?M_fU wfw7O~n QqFk\Ө-2.;,eb\+c0.(bHH<,Д!&l2kc6)׾\q;K"of5:0ԺvX|B#_mL`\}{2O!vYxvηjWsGӖغ\/n_&[ztqWOXwpfЛu 7y#Cr!,/mS% *8 ƒj~Ͼez~۱¹qEw'o_.%&v1XNx1Sk =b% 7!zWjIq:Y1%BuzsH**a=| Ҋ5,v5lJLK~{.\}YDPzy4,&'fpYWydW{z.KmN BH)c2[KoXJ!kdl0^*{뽙_7h](D|m0Ia}9?#f]ЊqUr I>AJ<*lk~硧KǿH1'!s$24@d42T\$4AIzw{ݭezr]C$.OciGnU;Njr|&4b_㡁' %CR^(.UqD!qR./:~t6zxAz> w6 *e(Dovdhp- ̠Y/u11e}bUiIq3׍o)Y.KYcab/7pDA&WP^Ŭix0Z*4yDo]sG A/VRgW/ N•$KעNѼIӫ+܍Xͭj >_VdQ{nߗv"ΫV%Uu}L'1&41t)tP3MgkXcI˛?ѥD/o˼RD -xSA嚞ݴ`ZjMTMjI;}[SQ)Lmwio3f_ow;Fsy`og7ߐ.8}9ì(›FK.}rM2 >Ɋ5iN,KG }Լ[Knc,tK'37qwX΍M=mdл=DžG﫮.\AiȚǚKx/mk)oR)p{? 
8۰1i" Č` и2P s+e2n>J]VDZ,|_b)U:$+3f&0Z3&T*<{3g 3zx6;pNv~= -;t*gRRƾW.a;Zب!T?wރ !瓨tT3+(̹҄ hK*[[Rql{"my-kK"@ udҳGUH{\i`TL *b-<\[>H꤫xwH@Q#aJL2D :"$M4h@:F(ۨkK.{r:u_r;u jgҮ؍I]3VЀ7_VmTNU{ЩIevt*N=fw}f1e adUREK1Ӎs3O1k=ɑ`MY,5,i&}6Q>pi ԥDY)(4tހg89IĤ<(<%Qf=ʛѓ=\WntKFz8>^8oeNoi|e2rEEg佊4hH*gȑ"6Efp;d nQttĒx_[)[NHnWW8+g H$?fLAȹFZ%Y cڲ=߇##,E P2XdjRy7EOe2½hVHpq]F}d#{n#2$\jK"<*0)k@XO3D/dT6`iHG浗ZH:34EE)&ȼFd4gV*s=&D;r$dNSfT!Ftj#hB tA)i'G"[#mVsL_(|?ś*-H! '~p+pOcYtX9-8qPҨ6}A8!z3Ep>0gR}n(&8㒉\?i88LIkf?aHTd.#c<2z}"6WwM4n[M[nn,oc_ENj*=5+BGG˗ߴ*!-Bqy7f!֋sZ'gK\,oB>͞tٺ~jo8]x n6ejјSe8'wJw9KΑh<|ҹ$[ +V$ʚUՈti!ofYX^caw[4(*>\zrYt8]9p+kl}NuU_`겯)klkEm>wqPkjH&6uqH~,_x~ŏra߾o_zA+ΟhiQ8X݄߶o-.Ǡ5sqۡ//F$}^\laYDAdF`eUNqVĊaLcXn __ jUMͫVmQ55{=m5޿\WU!V)YڍL ;;{ʳP^s%XՇ3=?M:{D ^Ӡ=^!nF-G$ .Ɵŏ:[`[n7QY e.& !ѸCA#QO#P7S[xCdRrlC>„e֋ yR1p ̓T9woЦ}8,=\ $ d'ld!h-R%t D M$fӡ4b=cSN`w2fL,)YjuRI^yTkN>j?' GQ4J N@,gEf+ͳ%!ai&e6n CNe >H&~oMz043~)b\"f%,!vm\{tHH aw:yqhҢE$Kh IIf%viC[pnp>j׮8 u-j[u^<`knoמAns,6em$ =ty婢w8L鹲<~& W?M9uǪW߽ޏƟG+/|4cMsy,:rѫuAD'}-X ZrYu/jZZFKi RikU,\kg ǟO?V@MYpDobLQ(d {iGb{nQ-JoM`0}h$C[6HK2TY烩\.qkNu'ܢ3!u.$N!2ki0+{ݪ5g [)=msq?׎t-^}YAZ{hɗ?r]aK.>^r^?k >~4>uɟ4q4ldnV@bSN7 o'䒋 }v^՞';QA@RFl(z'32$,̘SH""*L8rH"=\ab S[qRr̉` ڑ՚eZOiA?k=yUi@' a^YE60BS8AI -}Ek .VzQxUIdIk7lBkwj扊g>R|ɹ3XDɜd摉lHXTC6*Caa-U5 W`V{Zlʒ R= RvA$<`2HϹVg5gfUjqW]h*BEJߦ\lm>O5Y=;cXc@0E0I"eɎxXUcK+a(ƞOEmJѨ`TduY(ʙ1Ʈ֜;Gi]M:ֆ^kv30s͒FA;Y?]M<kCƚ'&0k>" .!h!IȢ( mXH)h>Ziq5g>ly&٢E#VjD[Y#^#q@DfLaq:\ Jcs.mhf&'"@4.;Ʉ# = ȶ QcPX.{ T&3apTh/|fn S"4QTS֜{ Ɵ }#Ѝf`%ӨE 9Vl$x}pZd"%CUglJYIBT.->,m3yg]^<{ż#7JZvfЌOpnK}BrT59`ʍ5AdKd=ݷU)XOޡ\KO(b:&LI)8JHsd#r- s}u|7%p#N°zM~`;t t\rr_׳a[v[={$gP/& qxBԘ>Tq*ފBƵ$Yłl!-J^zr5msmX~ᑏ}|s?0.V/q|!)',(Lh ),ħ/d7h7R'8{u*sЁ͇[*S BJQF%@0KnCBe'0 gW 3ȬB[Iꯋb}Ul,0~fvy*\{N!2CƒXB%');\0'tJ0cu&M5vMy%h&!7F ;:"Y >#H $|FͮrZI6{fF#4KM'zf6ɧ;7  hkq&"v<ڔ٥+҂)k.إ.]{ggX?Ff_] W~M~ʍb+eҥApk?oJKEE5 RpgΔyp~;a = nn\VnH}?حڶ\} YQx0.6tfqt=U? 
)o'}f-]u-'F#QM O#vi -?W7K{g9"x9z쵁)xE cWEe[q़^4- _jq%]qW*B^;\{~~F;8ZAS3*jעAh B9ZY c4Jy1.fкX} uE6ycJQό"j+uŔXuRtnYͱ3򀣝x>O;DIDΤLe(fw]m'7UW l>תѕz,]WBl ue]IW֑FW 6jѕ[2ŪyJW+Ao 2%+1j pjt%i)1(P9 }2銁'k 7JST]GWѸ7_8\LSnBMQ'FӂKVJ-]BjT9KMk^^rl?mޞ_\ߜv2tvL˛˳Sxϲ_z3emlK`/ޯ:|f$ Ҡ, ڿr/>"tm\.3"emp |R&c뿞%'l`9_-}ghbKh6Im'ϓumVfk_-!jV}O'"zE#մh3-Z*E#XxαE0?t03'6iѕzWk?uEEb`z7] m*>`M4t;f=S7p[YL43-吤uUWf=߬*S+EEWBוPlF8:=ѕ"hѕF[2E_‚<jt%FWLJוP:_u5C]CLt5\FWBc`f+g^=AzOj=Eqtsi6u~qC.M`E\YhCc^BYڱUWqHWLFW2ިQ$J^Y'Up<\XWYnÅ(0] ]}JNI7EWB;y6T]PWyHWIʢ7GWB Tu5C]99*ҕǨFW]1-WpUWѕGNSߕQ+EJh]N묺:0X0Jp] -AbUWUF#A?Xm-FJyВ.-g)MbN)<Wҫ怖ukj<%GL>F%\PO.P6-bh !ѕতEWLߧJ(R45銁-'\fJк'M +f!p@5ܨ1(KS+h^pǬc=g!ǩ<4&9hD}Wyf z+[Kjt%Jh)]WBYZUAte#Dt95\4Zt%!+Tu5C]dÅ jt%JhZAGkcp )8E)LAA7{|P9 %Ҥ+Í^K וP^At *n=6ȍP%ts|yN)[^Wx@KF$@S ^O࢚Ah:%2ָaqC$ƼM'oEWBt] e*Q.*ҕFWF͌"*]WB骮^ŽYAWwyqZpi1,Zo'Z^G 2tz0)e0$5b\EW uŔ+ V85ܘiipyPu5G]9"]9]1ujtŴ.uŔ5Puu]yDV+%5֘ou%FWsZg k ЩѕF5}WBKt]1\u5]hj 0z5ܩϓʣMt] %,u-X\]:1ħ\^n.HE ?+/ŽK #o -G ,_}i<-+k2k&+:OjO)ERԒc`AMKNp9BTzKN(-tjUrYT+iӢ/~S[;+rHlr3%qZtŴ>? 
)*ظrDu&>O*օeQz[bbվYТ"] pjt%SOʣj_ei+=yHW] n Zt%JוPR "] prjtŸdDWB]+Xu5C]y4jb`g] wZt%ѕ+$_u5C]!C~|`θIZoKוP]QW2y^-V8 ɶjH7yiZ>;%ELL*]B9jXTMHH⎫N)<ϓڽ@F4u"GLƐiy~ +WF(}학3CƼ8]ƨ)KוPb!?]qlKwc=)7Яۛ/n6nQօKoF~s͏ n;.nysX?Aw=lЂ;"cbI|=fvmg@N⋄}wm+k?%Ih>|B$w_֞q֥r.-TT$Cן[~[_ŪJ!- WpF lK;@'@r| х--za~??,xzQM;-ϸ}Z>lU׶Uᦢ.nq5A7AOƸ6i[2wϧd0`-؄#yVÍΣjJeiURRվY "qqZt%J(U]QW@]1nP] m )Sfןf]W~e\JW4?l'lo.Mn{߼y-2lxυOW Y_%%kޝPtk]Jל]tMӞ ]޿ ^~Z_?\]5g뛋+.,(IYo<w5Mwzo;|srd~S@-czu_V޷poKה{q/6Y4?sQ˩7[a|zQy*nŲ{*^H>R>~c>}zq[e~߸ǽ&5/I+yG&~8} ;򜣧_o2q a?].f¿\k,J]Ŵ $7m!3 \@qRo]7[$|;.-}nFɮ>_ F7?m ^\-O;\j3`zCZ}4-5l -Bcɦ4x;ʮ?m$648So!4s ?,ZM]9MpAzfXOt0_ ~B>+s0yn K|6Yb 臾47޵UkٿRph?_-шH| ЕmV:$Wru#$;$HћH$:lo!f-DZ˭!BMTDZ%[*c0&jڠ5m:hѵѦdةrޝZ; TMw Jʵ@5M)SœDhY{=b1;Lf c7kD34fw6EGX:5G@x$5y*{څLu hQJAJ(K"LEt(.c0 3}c,0ИU{ SGc7*>#!yD!8|Gz}]R>Η:s{*Yv-uY(/`TJ!IL!7,=fZM͜[ijH%uhnST{\Od4>cNqI>]G[unrlqsi g16c@:<6i$X ՐR@%!BSXuf\TYOkh^dl#rԌ>$M^mق1K(#utA,ڳZnB BQPK-#o'0L#mTȗBJvlt)K (<4Wn<w<աm#:CIٰhD(QTZuPyU狯+e$Xty;A8֌&e,&A.Cqk 9:uLR Gjm E@\@RZ6f=a5~sBEFpB5 E9V {6s`q]baI踻jmWHiR@F&E03ʄWgwNKYbԨԈTߔκi/.P^j+$_LF2mvYEo[PB]:= +6.ȸALAAXg󭋠@zIB€(!2] IN96uA)[K1 : s<`&S1qh(0PI!0E>dT jåƏY.:2M]{aXie*8(@Hq`Q -ڳ;!JPAw/uz6 `;TW ,u9kh A}ݸ<~d0WZ,Io $$ eFj<+%5K͐Ȳ(FҰI&=jE}b}Ay!:3#ac RUƊYdFŘAQLCa:iB0`߇9;3뫱}˚S~q1(ÛZVt+ccg;.0AQCZI|tx̓K<l:@Gֱ*MT$g_{uT 2G SQDdz R DŽVM-z+aoH-4!L^R41~iAJEIso|vSFN,k-_n|\bZR\o[m.Ao^ݿl}n73Emc=.ss7Wz߿؝iPDٸ濏m۫Zƕ˽l#~WJ:l<ߋms_e^_nwm=4^ut87?>Ua |q oZh Sw 'r[@N@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N[C蔜@dN o.}h }N V@O  $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@::hHp:N d@@'Jo @Qi'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q=]'P$)9"kN N |N Q@O d)+N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'r(ztk}0?>~3V׷~zW@=0qit*ƥ6Oݸ4P ƥ`\Г+ڞ ] d~qmO{ 'ttjl7qjK֞ ] jtb tJxBt5QWnpBWmP"]E}7y3:XP>4v!fW_[Z<,+^bw\:4rΞ+7V?&A+E.[u+>]CC!We0^l:6 :Wj[nfcۮ1:V Qgri>?}1 FI vmc5~>&7vڎ=4/_ ]j#Oi9+4|=xk7s}SZX0ǥ?Fí25M}"Vi*qϊL -~('%(y;ȩj3YUMɮa( (\'$W5MF"76Ler8!ʆ(|DwƵbg{m}-c6fŋiX)!D{^ߙ_]ۛߘębbUTgU(Obl!ľ~;ax|p}8W\w P8w3GG|`.=WZW=|mˋi{~vt IUms 
#-d׵r̵Vܪ%úuL-ùA܍w`/3~̽? - ?zSFR:n"s}W `ouf`ç"yǥm}ʴC?{g~rw' Da/zl{wfy~(˹eAzin» KW/ˠnvT  my(m;pQ>5aq}F<[F =agf -:xvWQ8d0GҚo'R !mt P&}b bR2x Fkg@8i(rƒ^?^'qz9}?qhNhU✋ɸ)kn*\!8`U\A+k}揃\z2m}DpT^mu A'Eopf Bڇp+i?]`VOX{UZzԼpCle/&7ac=\Ě?;jok횷kk8!mv; go1\ltv;0 y:"+p&!?}نZvD9B-BpV+kxាlzgDsc )) ! /Lz]w2S%BoOW`HUmnyTk.6nfp{HWFxeS¸eՅE6 cX wu댍(>*u5JQї.\kU^;߷6K/ IN<ΜYp02V<׉!9D'IӃ-ArC2U\I\@P2P8>&L"}sD ~Nr. &$mWfV*w=hc-w1Ǣ^`5S!.2#*hXM^⹕2g7p҃Kʣ)RP8TӴQp&Zq6H}2՚1w9SA 0#g;?>N&qۥ>c=C zEYfI}|[Ju5r *PO-CH(* V9Jq(MKzytK*[t˽ft˽h)[  @sԥOI~ E: j(a@T[>.I/^LS2"R*I4IJ@2e,.P눐46|4Z*A]YNlۢ/"WUkT}YouW]eQC$wyƧ<0< jNZR`"NoZ5+f5ESU{[(dtrk[j۬|F'}!7:=TTu/;X5ګRy 8WxYI5!$\{\DDATS#e9۫8q{;\FIQyЖ&Da9;*qlm˔ C[ |vun^^n?DV+:ϗwx-zt׸,)WJB0Ԕ&a}::G4P]:Rʵbk܊q𵵎_,m3wűbۄLVi-6L6T8ŀ4c!"PۍXAc^Pp86(A^YwY܃u\HʥBwZWFifr L0!9,\۵_/5H2dGAH4F$Q"J(=SQ3RYF'ne@D[0g·-l!">7[/H2pr.}"Uwjlb6ʅdPuBl?QBcxyӔ=`OXB.:Ü ~:qК+GGJFpq:h1\M(0z7T0J('k9fҏ>(/VOH}*T Y}3>3,NEErάipj!R_۷j)trr<:^ weŘ4( \WѦk'.pӵQ٘Ч[Զ4IZ~n/|w=hx!bI̥8[nWsf'㟮u7;o!]#q9t5 Fa/1'Oj4i2]|2}ZLl5r8pvQ5j׳FYg42.x$,q|?%s.IdqL`W{SkMgqsyW훿{廳뻳rF9{7gw? cMᨋ+Lϻj0,q=~:|Li[ÕI¡P6F6xDu$%nX?֖7v$PDu$)!P.ב^ذ.U=38pIh&&Q#T٧RHLO1BSL2[{e}>L1K=d=ʝw:{>z/2O%sI/Θ Q|4ȸQsUX w5 IIֹ'?YLZ3:y9&P#!xpGNSAP =HY=RGO%6O[  T#p 9A]t@:%t Er͈S0vnCmML\>En.S1޲hأWBGJ:lŋ $ UbU 0O '{:KQdu7e2fbp=v9 ~yڭ.:Zy!%dW;h4ؠ[CݷE\O#o@8̱]oPCm%Gqk9޵~tc~<w_2>ouPŜ?yüiV$܌ WZ jdռ-Xjm6n=z%Hd NīȜW (W!@Dd򞲊2R>=m/E=r[ R+ÝQIk2Q.ȨGE!KxTOM5yU6MY#* DKpփpB('Vb`uk!ۖ!=ϝ˛a}!Abk-C60 ]1=^}%j|^>n~TFU&!+[޼7ЫWhC>+,EH9`M>υ^Ł98JVe^@F8/ Et+IJ+TY PMs/ct1*r7^P%8-"RՉ%:+Ҟ\-FΎE!a:frȧx2k+T@JFe)&,\Rm~qGb]n{<,O~Ӡrf4|yc Kl!kES HnG| NENRҨUEM!Jlnx8 Ysq&W:(,脺16gJ{ȑ_) ֔x/ݽ35{,z`5.*Ikwc,U#KiVG21 &*iЦan ӌG7b}Y[7ںgނ$T+꘤Q&6{%2gf)Is&<͇S??Ic[1ؗMÌhzFq5TɈs#OƔ9C' Q2/feJlvFҀq J3#υg|$e7%N@ON3h4?18[=̔MsIɾhEbϋ$>`Dc:P‹+1ښl }<⚅),mƤcO>G|4lIG1k2?O҇j/9ޑmwQTq[я~4TU'vȧhy#m3%BOYE΄Sk]>UAZM׫~rOfHa兏c[moiQv4KkS.z<0RFy ;;mtN%/dvZh:(ܢdu6e#2yДFfcltiBA☀R\I]hx&! 
x.]$]cpCN76 -ۍn*bA%j?<քM> I Z k!34>T^^yQޙΜզO&{2x\+V{oN0n`ߔWmi̷SӲ89:9hKv6R Aylz_yOxn}ƦKMF 7e8Wgl7?~ZsƝOwxtzQ!ST Oҳj[p^^3gS̟ մ+^(p>jq }MCgT럝Λ6Y^g)rvGfl`ZΛjnyG+zf2%Kໟ~L+5d>!7ދJ~Z `f'?zLأM`m'aO6KuObn@Zr߫e1dZ⥷$sY"+שObm-=ί5S.,}@KkR2D'R(", o#Ą:]NC/))5Z/}5X⾁U0V9Ngo[?ϋYfOVemluWtLOHm7d8F"*avx30Uҹ1)P&<`BގɈ[TYcciK2$LVJs**S%Qq|\(BbBHu),/G Le 1p~2^_"7zW/ CiT75!ev&JL(GobT y$ÏR @|h^Fm.>pk5ӱ?Ff ˀ:y -2/%KdP(!M!Bs@ Q yH22eR͚}ŰlM?Ъbφ'%ȍ&向ܺbm.L xƧ >]տlQr067y~Ymd~+9+KF7_oV('Ds^ hgk?W~'5XÏf~>|׀st^SNBж'.*O,ڲ@!Ĝ~_-OuOWUq\ ѧ :1C]oIq^D@\̐`䲈 *3nwSIL*P\v uwm*L3A[5 wٝjMq{1S9J-\ƩFXY{..U &LqqnUeajkeAۻ0'rafܗ{<Ә XsL7|X+ Iiti2g0P YQޥ9/yqN_ .{__JӖŹ=>Og2apUV)hsCD+ZDĘ{"|녁f0{ʡBd4ǔlNd PG#]5^-Oqu\#mv6Qaݤv _ĆrkQV0HdR#=ϐ<I)u }N8CӝR~&0ljgP RkRb(@c !KlN-=&H)єNfύ΁M2 h`y-|o|dB$vNLw0l465 dN,T|zFIUHIm|`&F53dDӦ9tStYFIjʽ 2|Zc@_qm)AN)mPhӕ9QbqVhtc-ұq %/PK-o\^p ];*&LZTR-Ru&RTWR-=RAiYjIdz{miZn'VnpęvC+ij7Zte{ڷ֘ ]!A-V@骠 +.!*5t \#BW-l9]!JKeOW/6;`kxW ZFNWem>CW VCt(M*pEg誠 J[W/$pulm]ݡVUAizztUrj!BJU kCtUКt"J TZX}/M3Q&f䛫w8*lZ,*/b/ wш9YE꒫K P*p|{z+j3wêD?, _T&L8n^tj``9~[2SܲzU`,AWW[mu.TX |zW#|o?~zEpbC2 H!-vv jlRr{EZ@Ye$y ^``2TCIeg[gѽ#bkg2ߍ_Fp|9({2yd2E#xſ~\l%#U0;IN.o֒bka81HO={$|\uO‰b%}qqDbz[甠R 6yTT RkRb(@cÁ; 2&.qGmYFXS(2mߧujʦsC]LO1V,Yzsv{W90H-߮ ؇r ڀr_ @pHFM\im5%vvME 5ݲRAgQgn9g6T@/*!5'9)sCMIP:^ ax?A'QCys>p/Ӭ96\CY 4ɡ"$_4@Oj`+@V^C$GYZYu$YccGWFinj &Hod!;/1?д"LQi "3QQ] o+E1AM-^CYIVξݕ4+qb[ᒇ|<aY&:Os3gO_\a7 L؎r~.I]S7L)Q ׏TD|(02*NAG y" ;$'ZqNy8qU߿=&h'зo:K@x_ԁhڥǭ>Ύt}ɔ{Rcgh8ťH]~m5BggGo\cҜNIث69~ Wrѵ1|u\¨)?k}.ŋkՏ'\M.f]ff;-˱]!GhU6~AHSK\6$MͰf4ilfUY㤍0 G0b^G|eAos6Vljkj 9>WTdqJOh{YWR:ieqzDso{|㿿黏o޽H/o? 
8@EI+, a`r f# c\-o{UǷoG>֋*\_,)J6_ X> jaGAU[/_Ppe_|:xD#G\MPyW#‡y.i^P tA[GHR6IMC\Gznan\zvl%;;sQZvB9AvB#Rw;q[>pOEvJkb52Iu@ JuIbPDsE!nO= &WOT:h>ghM'x7˰?}gaROٙKD+K;2}:MB]6ٵ9Ư!8aΊ_CןƪGDɺ1}*.W,&'q溗!yg?NK:2}B?zf#*W4Wt7"=[vU8%-:c'5Tl -q gΙq a;3 ۩fӦ-m aeFh`Xi#%hkXb$f'Q.xc%E¨ (]em 1yN@hʹ%Fǝ{]0N3絶h FFALV%pms,➃ *H:A"X!HPG6YnQ#FKgb4(uAGjy&C'HRp8\xt$%*Y8ꄤɠi%<)AUVve ݓ.&qs}`@4x@3:IpZ`8&Dd,E^"&qI.x5l<9596+>C31BĜt55VKwN~PJ 7J)' ~N;%S"JF5sS]10>H=Aۃ1Ilf iTcz;m'rZD#\z$l{pE2`a،k%xuw ?>|۫Hia j-vthǽP>K#o90RPU8 R;nU#,v\;tBHQ+#x2H2$=$FR'z<{e27(5S;}yE4?@X[NpJF *ZfZgmjl[x9#x>~Oy]'{nVe[j2hKD#]0S b:Bxd(9 W=Y\_K,xz,0]asj9Ӳuz=Sp;6*+o>mP̧K+- P*mAGMUllӣs);T(m.Х8/Lԁ p+R+NE Ѫ 4@ɔόyC-@,yo@Y"1Z(OQ\gWڳ[g@&&Džfrz|ާ3/($,*C'HiA1ń.R{JɜЖ=OA͹CЋq{CW=8QqJdq d|[ 7Az5uyNNjXَ1檽 ϟ'ȬXz. qZˉ78 , ^CR"axj)'HSBuLֳ UQ*ڶ2UZicmj.$}\ed|Y|6>YVa'yJn;zf9TZє,C(0S3I)XѫM!Zlnx8 s=,MtP+5Q'툱 ª M5qv[l?XvkX[ںݦD+$Q^ZQ/wy=lMvsq/{b[ӏZDӲE4E,VkR9UkgKd=8G$sdDPBڤ489-DNWAuГfBsXN'Қ8{|bU&1qicm..vvq'lc]At҇dA)iKIpOwvvXakH{ц=v k&0Ƚ.=mcϦóauXAp]k3bR/&hY+&Rr})U%ީdZb`B.HҰQz9W¡K3v2:Ays?~eVibeZ{)?-ǿ}--Hأ. Qx9si ҆lt69Rt$hMfQ41L>H/4![%АaŖLj!9S1O8K^Z&Zg7I6q<"]z3F|z&a'bT8]paIjc\D"Q4.>=mgWw֛:nC@jIH:c\P{v 7HeF}{6l#4y[ptHĖ^z[&&V%$v_|_^=N^@ -eNj QG!"|h%TIn<ŎKnKq_| CSe`;ĶL<бA\{~,lV+|^I*5ia5MCot *~ 8NxVR#QQ.b>Mt8KF%#>dFBNcE+ V`pEJGIYT K\Ks"IR|M$BFņ1!15fZiSt`kBk+HKS/rWU,M]OΫWUgF^^|!=Y9?iDGx)Mrf:vרNo> @^JJq%CaI4 'D>/E34*)s-( #^j@w Fgz?#zD/m:q5I䆼?鹱|C -beW ٫:U5>a3d |&4N^]]R3pJy.Ԉy_t0^IRlGl? B >aaT_̶ u! 
(t5.~qDאOgu&DC e(_/)_\\Z GvfoSrt=,WL \D(?@e09[/Z>ԝmZj^g͇CoسN.*Z5dK]UZ?a[̱tq8ՎJe#>uXĒLWmxռ,٥%{ԼRr>G-͟n[y+EUq7bp{+/ڡϋn| 4MB^9y+b5'`w&^XX/a&-1E1RCU|iѷD@έٱa /KTq\\(/ǣ/9|raL0ϣR 2 H%傩_+,}ڽXx?TګKrR/r4 1I(߯1z IQCih,3Lnt7MU 305@T ƹD2BN4UmpXK_=_A70YdY& 0Fr}l=^\yQwxrJ>ި˕l(h$* +AH&~}"H 74}6ݑv% ̂cDKSG)QPcqZT\z< S+Wpcڴ\ ܁/?7:n4ic>AQ!@Oe8*;|{ɻnU¡ axG,y|9>ZkKmsr$*YA0%ep,eQA^|VnȹTģ7c /'x!oG>R6⽇8nzۙ޶:] ͰiN?R647u2T><@Uw~,QK4U&|h'7?UWUQ9E6 |;X7fU&G} ܭdz|(SY7yEN f /#γ$AXOArYm>d-jxRR6) f`$EN`.J>l^,peζHgX>?y=Cm6\{x,8Uu_!v t9G7@b(riBXO AZH`=RR>Þ=:៥ j Fx5ྵj{(_.y:0 "($aF$54LL3ʹBO2 LS\D4A9"OFA! `VS)ρmVhPepA n^o?H%>s#5p ,wׂe،NQʨ_<%V#A %{Ee҈2hNx W7&)+2Ɣ7At#xsDO$&y\RQi`, :rq%{2^n57˭n5js{$EE2&̛_yMr9Y)rFߕ߻ `2Mf'g?\sӆG&V ~3:FS&7n~S_e5՚]J7W8_QKlE 7\YJ>n\x r W.v|HxYKXVM/<,>xcp3p&j>T5{/U{ ho")L 9c̋jv2/mJ`&8}H Y%6QIBQ\g& Mir1ΦPC􏻨=yQMZ>ݺ-A{܍=ݧbrE.PvZ?eIOtx?-O+5iNgoJ=rlӫ PW<㵙Tvk(K0aۺng*MwmU b"7a؎yI]A#zzM^;ݺTVLD^KfLϲpT֊YJi&홴sDP@r0ZqB+Zp7x[:q/z蠎:8g4k͎>FeCDJpJCȤ‹$S^zh v)Mwv+0tRL`&EGfL]F*u]ߢ+r6szWǹvj1w5r^Wa*nW YjC oX)f4J"O_B"GQhPhgQ҄Ĺj-TY۶wşel˝h޶DH>I&= x%8*%!@˭h`TD n[Rm8ۛAx4LH1F"JMкLD :"$40h:?vDYi]u埾۝_Q(W-W*v>]o ٮF[7{UL.cR'S[eBOҪ+#,B5#2Bj9}WA\q%"ĕW\E\ejYk&f*P35 bX\Nz)sfrv2f- ш&:"DGA7y 2Mc,(N21v" #o%E峻.B(h&৿ݸ;gF[\x(t ox~w X~4f[ O)FIRۼ->g3_}h,=Q3W%LP~}w ׸@Q DD.F[Fk2]kTZ6h PkV)rH3*u(*SkxUFkWZsP'^ >ٮ+Z{/Jq+Գܾv(xicqJqԾ㨴_=B\A\}TJ2]pB(<&|@>]t) u1[KbLzݱB&NC}DEK}e`[2u\EY**ɼ!cPQpdP..ܮ]h+[qlW]7z^6fJ!\lpiͣ@޼og뤚%О+|uvlZ7M7d/+#>\e{~osVҡ] qaߗm}=o=AVMwpX~_&Tì4[ e7da;];|JeڧhoFrz-(+輋-asgأE0^Aŷq^o)sJO<TDOeLV hC ;^Gn1]3-f5Eu. 
,m}<>hR|[r ]vх?Fn=7ķ{k\+׻m/i=$r?bVZIەcU*YE)Be8~K^xg'*؜r:k2ւK&| b)X]fǃt2#;ivNcR7 n]VkRjDetJD^ %L;csU8OrS'].Aىr!u59<ܹ?1/gfiJ_u=Q6g,m77=M8/?gx<*D4ˆ΅YUG sR*(*g$-*kqO+,VrR-HRq2m\!Z(/p\@hg")"SV}Nz&Bwg-mx5%̼ء~S:]—.l~֢K}Iіd2~͐ D|〮G% =PxCy=cvK}=v>3^$u$*Z9ktΌXJF'RPit.%.R:j?xNalœIqc訯pO7Tœf©,H0NF ,R(,wJ.x;)#\pX01>JPL oBlJ\J^وq+5.0k 8>csq{J?LInNNt-wЈK!*!^i( T2.Nj*ƃA:iH A\6^!uJ\=L.ΰg=7 ‚3sO"zIȪȞ%枨:i zf0 (H>X,Ӄ;r~@D_hB]EԽ5a , ;`⅀Awt+gaI.aO;%S= GҨ4IճOVkoK^4-T:"@]SqO yy},_oXL9\֏J4HO[7[7%Ru8N1F4a|Ә~98mx0* Κ} 7 w]ƤY(QI%;zë:jiz5aT9%w;|!5JS.{ռ|HfxfXKX%ll1sm{WOw ί܌Ի GLƳb9{BP|;qW70x@"K=__Wi/ ?Zd{Fb[`-`ONnbue;A. r$] _LZ>yp (ZWK2}X6g)4lvB+Vɵ׽,N'c0(S4鼑 :q;QG DRҒwR0x3@sn<$K^Cpx)p[Z*T*:NkW@ǗUn<{UݦGBA5g,=>Z{8uQ cE +o9tB뗯 /&C^n6z5ISg>1x1Y%7ޣa% z$ e}jgCcM:%.]+7 "ճT X"UɞW BO?сԔK4YRI*F.*.dZXw~,<嵬YLS C2@"3R[J1)B Q. ˔e<ȐMKkZ{bZ| E'[<DDš$D:alWAe&6`9/US BH4X#TQ8ͤCh(ĺ ˈ>H;)<ɂS`4XQ $(::ArP2cS]MΌ&0Pq %jBKYF938E"3˞%}a'4-/BHKM"BHړhcIQn}"zTQYrUnm"@ݚzu]bct[VSBƞpS,fϿՕMO|m.mp3dVO1QOFqG@3L]SktQy6@RK1$FO[`֥J m'f|ZK㫺7g$NcmL?~>.BDkvW3_ͅ޼q`5ѾV6BhWL`Q|[3Gr.tO.* e VԌ̆OY~=~Ye.0V;Oq Ofg+.G: -pGibm%[WRzJmˈe,k]fY~KxRɬi'AOo6mUۼ|m n:-N4&ar eRum#>̼;I0!lxxË3B?;:_'} _ޞwtN*Anl9Xs33Vw_?}7ƿP|WJ]wcxOgNT|yĭ8JSvf)NLr,˝_v~#ٮKKvX.wݠ..ܲוvkĻ‘.H|Py2 XQQFhkpUGjWGBiEg[Smy!@Zb&a<بwnq+"orx) @<yK%;,0H79v"K*IW,lYV$#=ںH/=A.æފ9VDW]u(FiGU \k)\\nLT{s5f8,s:M5[8O租~ROi|~Bݴ,0Jj]exU{gn͏``RZ}ΞG7RN d6 d]Rfc%Ѥ\:YHP֪V-;NWϕDAFT( w6!6O9iCǼѓF|gBd\UPMu9aYcAÀOf'zj8v9fTռ!wbG9s|#Wޑ 9ǵ4]^w$NyLONF~N;3df5ļ}*Mi mzE ׅkCX90R&k虎MC+&j>h`1M*8 fIi2CRo l:pG5f14@ѦUmkֱ&ƍKCJfap9lMe7t߮6pw$@1Fq/Xb2A"_@s{d*.{V3*[c.CjaQN\C29kH%Fv}sq68__e6V3u!B\].ɽ*={oI7bn!^^^~y'U]WL碭ZX};ҕՍp] J(Z,|,p?uOGKvUydQ$Gr{l<ZGrB-0,+O銁g\p{(g>] sJ(qxZ#]6 o\JhmzB ѕ}dW³wIO=7 iu5Ni1] -zmБ_;8^tŴ\JW]-PWw+u+5] -`ʃ'֪ʂ1SɗO}`I`[S^{WK3t+ѕ:EWBKu] ekKW])AFW B͏]1Vnu@AcՙsV_)`Q߻f};H @wiuM o^:=jzAaoÇ9?ƻz78]'Ouo/5Dص1pR ǻX=?L?+Zҭ??2aޑq=M3 " .^t%ZוPD]T tӕntŸNw3%P: YJ!h}ͻӀ'u9iDS([^t-z`;ҕFWs~RUW ԕ`HW Ϩ+] s;t] UW ԕ%w+ЍB/`ZSRk+kV],ٞƮ8~Ʈ@hIbZMZ6=JѕR7i{0e㪫 [nn{BmW(kB}K&ܷ xnn:Cˤ@сGYko־J5+K;&2iXKK>9(c` HNpqrZr]4Hyi+5ޕPjz]10u+EӋ] 
eЫ+[Jpѕj]WL kjrQkqFWLKF+ V]-PWNƮXi+u]1esV]:\M!k}\W!pWq^2)M>)|䮓͍y*2ͤt7&^TZzzL3rv[Ly7ƆUUkm<5)~4(xZ8P\#Frީ`mG`gEWBu] eXɗ,@Ob`\7\MJhESBypuE,zz?}IGO%O?yt5Rv]Ѫc^v+6JpzѕкҪj2h6JO>v5 ] m)UW ԕE$)d`o}7\Jhi]WLN#^u]wAGRnt%+mJ(quê#]10~Au+5Ժ`pBl Zݙ1ZhEp2L%hG.t-*$!6p$`rSzMka"']1p8yƃi/:H$] o~T((W]-PWkV13Ѝt/Zl~TF(]u@]&Իb`aҕnj]WBV]}?,zJy26\W>4\sSѺ%F-] 򫮎-z(Ց|7S]M%l]WBԪCv+FFWK6@bJpuee쪧`-nt%+%ۺү"tӧEϑ}x]nwٹ5gk~8RM]XW#fx7]xU |:ey˕?n~WE\%5kOcB_^u(%o?_zw1]71uRox9]pe_Hh"8oͭ/8T~ۼW|Njy;{k7{_c~_+/W|]={-v/ݜ߸*~(뷭y0Tݿ⠾jg*.X$)|>cYqX~s|R>P@(ϟoFyŽݿ5Wc0A-C.x=~Wj`l?߾y %ՏZ|-lJE'zDWGGޓ_ ,}ޔpuSx'ۯv |\P7U.|F=*tT Y[騜Q$@sP\GωI1ŪiuU1+~gŔ5YC%1Ǿ9}9}i 1gRU !dzztD\YFM*iJA#s@̽VIKR|:l#FEj[5`cp )?ZoC0"oz]R΍9TkmȲיq% X,,5&ɗz\ӤWlg{IQ3G[])YLuL-IFRKwPd06D6c6m@Uk%ёLYTZN1 늳GpGFK$3RZ&,CB됔2v 0iןd!4)lR!TT2m)ާw)W 4fѱ]ˡH΢dW{! Y v^Dxޮbכ,1KFK[CB'dHXlOĹL.Vkqs2gUNȪs#s( F[4hZ)*!w!Fil R"wUS5}GYGW(}fE6@Z備3XaS8 Vhr IkSm'MŁ M$ԳŬs֞Dqˌ>$%UVش+!Kb&E d0kJ!B QoTTdC@JRP/q*`!H9gjFb[Q`Q n

Z6!i4֙pG~rRFIuQBu"wj$%0!`fE/cg!.z>˚C|웊K-xdMے11- H"5ZI|tx K8 >o9@G~U2z(9Pmi>jZ.]Us-EMpE A-)JP$RDM+ʫ2fC2*) +]h^{*ċ@C%AYZ{[{ǣfP܆`m@]K UfeSU_ }ygD%j7OS,9tŸ́$ȝLVhӭYaǟOGUD|ҷK*aK4+Qw`tvhC*r]/oՒnA׀R"2[Ez(%g&b%ssǂ@AK^/B>J۝ :iMD0j!ik 4^ !:2%YصF!uf I,ds P  7Q%o CYiI6cB:*gD(!cȡhlͥV +Y{Y;A5Vec'޲H@)BM~뭈 K*6,e ԿYn%,Bj/Vp6U Ɛ)^6M0{_y>M7:oC^0ަ.7zqhC.nn@3 58)QD`]QdPf1h֘U}q%JV ]"Ș43rhҌТ E~  @9)1j=#F ޖEgr9$Qz e % )kz] 6B?+BJ1$A'W W#ooPg]F SCTE# HMb$djbb,Tu`\GҩH*8QNڙZg$h;k֪V%0쥪Iu2 BhaR5gm} pStrgBנ|+M%ćPcf1*"wU| 6Pڀ@t…Y'X)*9 X&] -L9Ezʨ}!1?MnD*St%bٗR#I p@h JC`Ys6VFRz`eSFuLf5ؤQ)g|$Ct.H,=wQ@Ǖ1[4eySʏH]KF@QR o`j?z߯j-nlv0LC0p}\_N_!N4Ṱ 5i[V|~9?8R-69_-s\5l^/Mzv/hcX-w>;jw?1/6j{={\*L< 1/+\쾕bۏ[mO/ۯ{->N_nKha_n.hn1ai dNl~L3/BV'T@=Zt'PHv}N ϙd'; N v@b'; N v@b'; N v@b'; N v@b'; N v@b'; N vRz@J9tO手=Z2_QgD2?W,#NxW>_@7o1|&mv3r0޺dkUJh+mB#U ׉#YYg#n{BZw">buVwGߧLԮW(ۛIЯN:CDϷ5?m\Qj.{3]| `x{]zcTm_2gg(Ϥ~1 Se'dW Aq*ve~@'b;'ܺ@Lꭰi*/(_.QUdc_<-JèX dv任:s:y?[l9_mڳHC [m[Ue8/jѓFQl8M nXd۪vra~f{Cٮ0)'u`+8)Z9F՝e銉Q]^TL[غzܺ=Ppwf!,eQû5z~j:p@+-wχxg>dΣA?Գ|O󅮘o@{Xy75hl{ئXoS_ns{sy`5m gc@jgTŻ ,-ڷ0K"oyt* 7}!,(_XL2]47z2PJqr!'FxZ&€~\n#MҋMPL.j=y=!iPy@ײE[@1X=Qr&28$<2rZ{]K @x%yg]\z9eh;L.0Ґ]ΚݣvSCKSG)cVZ&ģDsqxhُ'& <؋tsWhtYzKc9I|[Kˮp~*2xp1 :װ!ۉ褔:aJ(ENu߉ TVuz-Jkq? \R;|}PɚUFp 4W(WFZbd\L tYRW(r.1A~pcNX&RF6[LJ[*c嘨vX%)!j%Ȩ; 0~ E~\~` m 5 g *ՑC#v,m}"-3\aS]ڴקMkv wZ;pzq9Vڵm΄!i0ol$G]L!he$}ҿnjOAtR&p$h[Nt($t21^hBZV_ tL?dDfNZs&) (7J!߂vڮB8?\T|Gk|h83z(3Nxg08o\Z?[S!Kl Ky% 'Bue*Қ\$ڠiŃRy;mGyktwTidO$3-@{5ls&Rq.Qkr/yM-mdux9\ a#ceffHNwy8'RŻ,Uo0b/"KPpH&zg~0$HNl @3Ovql8Wd:xx02¶Z?EpytOدyQ3,"3ÍU*m [)sqC =<:њvʄ6ġ-*ku gg'IQAq@\Lٯ[q&5щޭ~uYUvQSӬ4gyB-oݳ\b2t8 *P_%DРl!,o as#T[(lߒK"/Ƿb;%׊o^,S2"R*I4IJ@2e,.P눐47|ugp.pʼ7w&괹xC{T\}a=o|sh';*%6ݔwSJ Ք]z),%c]) ;b Y7Y^ ь#Ñm#ɱN f /#γ$ףEGx5ݾ5ݾ55ݾ5]S+Óʸ) :`"sI8 v[(i.zdJGE3sr@E.Jwuɿumq"yjgԲ1oX[M-N! a=-<Yh"v=?([_["GFkxbɾ1hJy[bRd\B䙜$4g$șRa1 !X"SxbZ.' 9]}پl؇1o+"($aF$F&\' .qV_8y.ow ;QDPH$<"T!:Fsh+c28 Ix4F|g[NLk~%"p{@Mn{ok%7KK7xo,76<4HCI7@p(L@-# QpcR1.cLy O;gI$sAK҃E(XY}h"K&. 
$ i`Fwzub8g^뙷#H"WJ L-ju2ꉂĄ%@m2>GF ַXT႔RuhѠ6CCz(j۳bxFI{&m%!#F+TUE hK+AA㱃Z"ka01JV~NQozUmw7cM~n6258;e Nf@OQ[l-CW =Xje]S)o:R$n RU͠M+bj\ojJr.zBD )+(.ICM'G/< |gM/2v(Kꂌpt37B.kη&,Pcx* DKpփpB('nsH._lMjyY3wH urbowa]\.)ٽK`8vtxLxq~qڅeD_|y-hU pAᯉR C͠tN_s{ɐH2/t3sܜjpgBQFh"HF!(@u\A1"Բ$)XB?jYL2KäHYΤ(4ޠe 10D&FA[-90Қ^.m$P#?o5l../FoQK^^Zeq)*!c KMI>6 BGp&BKQGJVRMȀK1|ejbɱB MD0]T2Ve49itϝ^ՄH7.%&ׂ[Y܃u\HʥBWZWFifyGpae \72Qf[-M(LƑ bqs8!ej8b:JAyϢxFyH*1#hE(`$ޞ9B3J.J{lj^ef&#vrh&gb|ąVd#9v}/UC 9!ǖxSjԢ~uﯿJoKq\ H R ۛ=?<612;WԠ#2QB9aWru甇+W8E}mQ$:K@H.|}RM*.> 8nVKhVyr*}S/7qZT폋VO#|n\cWVxEM]wmI_l+{}A356/Tbb>4E8mN$6*-]߮aG7Ҳ`ߘ/4v1:௵ٓgӏΛ`4D,\Oau1q6ތm9R GW:o9iT%.W$-]jnF4MDOQOYv+GËϋܴ9L뜶+[eV\ꪾFM_'O5 K_yFUb4sc_:. deqWA:z7_^{󿜿8LSpUc`f ,ɸZ?=IKj_8otv*9l]Of"_bX(a; W֞Jt[MC{k4Mi:6ݿn d#%y&o+|*]Ɵwl;ףɯ?Mi7T>2_TP䍜%i F8[J]Nzl;aT/ODxv Kζ xA^@Uxvy tך :+^>ZO~-XLi1^AYiŐyuey'HcT؛ povH9¼f3puLʉ7J&J (6~墍)MUfOkD%C7Y'pa+Gq^Y`^qRU!Vpũ R*xB`vO%tJ`)Yᑈ7l,yo`V$F+ʫH]+Y~G;f.HQ39<= 29I#PjFTib ]+R{J+/8x_MTm1S+!pk"ьKn1)۽-8/a\+}?{+nd*>*>bKp:\ypʅ6 8 TW,D{E?N*GaX r. qZ'o|9< #w q1x\ؠ ъVeȍ0wIJUiuJ2670 hqɕJx%f6 l;blńMª*mZf<~4 +ݚtlںe=k}O"JebKk+xŨ>. 
5ƭ.VJ*7 iLx-iHP,1YœXH`=og}ڙ[0F 5cW5ؖMˌhzF^kRxI檵3åOƠ=8GA8{2%JU]o5I Nm4`8 ycȖh,14=%̈́NS@ci;Z58kD}L0xqRW|Iɶh[EbϋI|働X GS Ac5@0- J1MC^T i=/‡Iǖ|،h@a˨X>O#Yc.;Gk((5`6я#Lf(5V XN=t0Fl Z ]ڃ- =]!]1** l+@+):]!tu};9wrYrJ {(ĊK+Z Lied쬘euA=<ƘJ3Q^Dz JYiM.2ֲ’R4A9kh?>?[CF0M`Hia9¼+kQ='{={?uƣf:'l<-^puiR\aV) .ȋFE #Q\}_\>31A(1\E~.襘|Z-/?4 ][no>v.k BdYhrM)7?iqwɳ"'ajNl[_~@~Ly*FS K'[ӛԅN^G%)gQs*LzmͺH3a?Q}$͇1_ Z!R*AEuK- u>]EGyH/D&㲿IM~Z}YJI, 0'eɝ&Y6|Ii3˽Vîu!V%#ֺ*EN3̕3.gYVI`2QQ"[z-,gPjr Z܊lTW0 u?TOX~*fʭS`ܿ?G'L9zz5NЉ=% Ż@*3F͝γWn5C~2Md\=f) 8y ]\ןurEw]SlL: m9a-0KVi~k2tWOQo`c~iyTHz "ِ֮rWRUx4y|oq4<9˓޹p0"вP)t^>$*ZkJ-6T&P@Sa6'dCWWg]!Zy+)l(EǮVsFeP4t kzIk7l.[MXf4^ZEv;L83V]H58EDdmEvbޖVAW3}K+O!ZOD{K-9äȊ'Jbs+D:(U'?FSeP ]!\ ]ZJ:(Y]};tלz~J8( `Nġ+luX 81gzF=U=Twu `dٗAh;-|SEBGV"9`EUfm4\^uV\[}>KcdY~־0| V>g*9`LxT2iㅪ>Jv Y\TbOfG4|ZY6*"KH_|b[ox['4ɑUPQ Rpfij{ɔ':D\B3ɸƆ"WD+^Q_R]+)iKgb0rEJ~S~W{$W[\1 FVjjϮ~2;ޜ j`5\VfDp3ŴeQJُt1oprERS0fEZvbJGe(W^S2 b`NvŸB+֯(bJՏ#WvG3էϮ*'j>9 r5Jl^\^uʷ`kj:\IũFٶ^#W`gWve,G`/a<߬/9|>˖O3 ɖ#qD,cHB",5Z5 c8l1GsuFFʷ0 il6_O-~kc乡盆>\ w1׻*O-ӹ0y)|nYrQ ҡJ̽R(NܨZrùz)LOAKn$nT$\xFE󹝿ǫ4{Z$o0WOM8vE$ҧ+k tߨ7>.XxEA`2,߃uixH-t\̯݇1!}*;>4]38Nz|ӗ˿:ޚ2mzkJx,AlĒ`xeWUU9nި:acC@l fqQ2η}(GG?Uɇ'Z2 Jz{xSj#6KUuqZT.U[3 Ҫ&Cf% [4[&@Y o[;G͞-B(Ò2M8DkI[?AĔ9>EN lte$NGk#.-ο($1p ъbd %? 
N2]qRPX||DP= SLHGVm T;Z}&C811Sm^#W90O:Ҟy(L[ī`@3SSM|5-W7tmz=J?Ft+z_2g-L?^g6'ub}."uAm,>γTNҎ͆)=l6KXF# yQbT%)GA diWG*-ȣ(uNffBF*96.Jq1'uSb% ICEִ~1Sb?SEFI_%\%'gZm+T?\Ꭾ3aU5p-HQmOe Wվ:T{e+EڶS:媃rF$WFPih\)^t!{\iv0J\1m+ԮX $Wl#W\(rŴMOU_a/W+cyǂL8rŸzbJ]+ko4}_h0p%iky[!0׾M(qU܅(m՛PVHCD`Ƶ " ڞ70%>o`ހhІ$W|Ї+=^hi\/c\uG:B"`kU0rŸ 2bGe\yÀ䊀 g6:\1 Sʾqzw&}U5`j{m_jD뵫QMѻ rzyׂb7\1.#Wm+l[vQJy'EHrh+նEVC)媃rBZ\0B8rŸև"WL}(IU{\V䊁 FbZz"J'E/W+++ ET{ TP{'劷=ER^h}JoױۖlFwm?ѣaVv|5Qw$W9ࡒm0jetY-{|X #`T>Jq!qrEJ)}?JIm%$W d0rŸޅ"WDrŔU'JA\B`\+Mɕi\%^:(W^)?e~qqMӻE~\K6QV H_=M(gQ2ab6x:`Hhqi;_>bk7 rRdY!zv2,EP q}HNY,WjB;o;,w9v:p8@'*>\$%WnJ-8{%a[r)PQ>' c*NGK"r <{U _&vkϊ||.V˛17_b6[LwzݙQߎWQ6cN*Z}kOF<[ lR|d2P^aBxe޲-nWjVgo) ?(ݎr lKAJJGBGYM>JѶ?SJwV^3.B+ N].E(6ҽ҆&ಐ/߸>(NS*@*ep]r#7"$Wh7y~Gy*;ΓE*UƜ܃Rhݠ) ʡ]U1xtNJQn䬒kz]6,o)e(sJB-؆İֲO)eJ0*{a&<'rƊC7kjWJlx}ߋe*!"|wA-^E Ϙ|`} ^nV@V,jb՛ ;8ݿ.yȡm|?UQ5z~Fհ)/Y|fE(w,/$I)fD}HVkl o᦭7ex>u%d -xz5Zxτlrk(8Edޜg +cV:Nk$faY#E>qi)d ZV9@Ƽj: 6GL]ekjmDӭa'6A !ǢMݜkhm$ު*R;^{u9a% dz^*)bw8"o3O2j D>Ր%0MhMcjR}ܧQB)×ZnA|sk[Kv1F)iq+{UÌ/8q ޯ~\B%=u4BsDL}}-mrkkXaZcVi4WO2GՂaFQBxT[+Oy:ևz$YC}Xu2nޏ\wr;_8_N ^yp "pV:^7ʹTpq}N $c Ppa׃@5K~<٩*G̀8%}}=ʘa[97'NHeJvS)ͶޔYVV~䍂-)6fNUM\}X+^l~,F9|Mln 슞]׀(5 &\C/S$B'OKT^etc.ײiaɂG]wsr u8WŢAc|{ja4sL|հFC<-(>Z?|%\*NZTZ.K[C>(wmXSaX,$QJ9qv*8cɦ̋dك+&y~J}hCŸ7${K;1aN tpJp4.s*I3fIurq"-qTaM EaGk t*ϥ i)iKvXqVoYB "J%·r IqwXO<·\.FASLe"ڶ϶wSF+VGVZ˘\%1 @~וm+i>Z NDW3'u7eO.J/׎d*REˮ^+9  hlhIO*^AT&&(;t1T/ᴔy|YS+_D~VTcA%c{p"3܃'vj%AD*=pk94!aLp@Q^Ph^‹IR&اpFf:"P`GS8x%\=Ni u~5:FBϯD2#k/1'=׿zXZ>F_~"S5BSb*HYc~> `X+$d9ekӴuuQ5&7z</l鎕}_#eQ1)e☴I}9u:8>+C!qr5ֽDn?!bs1b ax^02#0kJPiKX=#cl"u~rL??'vщ;X4T5u=$Tf=TJ6|BbvҒ:)M^#6^V,nk/tJA9[hŰx㛀L ѣB8Y}q){4>4'3k "ǝ$"y,NJ#8e)e5 Jlg*Gp6bv&uLnjI/!U:u`6kSpة`-jlTחGb߱4U|YJ m$}):^-n w6gy=6OV8d~v0umrIذ~˻zSבvr=!iFSi` :GM5y:/W9G=i]^~N:Rx9BQf'6LJU> Kk m4o5'DJFNߥQ9~ x]Ne],R㣹ƋcKt)BSg0.b'˻Gt$(&ܨ]hT{L'_SSKTv玌-ޥ;jf|CB5R1%RA#|ѣ1ώ#o^l  J4<(|/5mWV4?4'a??eOs{X( M]fˇ& po5#kB[%Iʶ+]m4EŽ:v1LK:TѮ?>(j_juwt1ͫ",d'Fؒ@IT{P*UMKpvsJGNsJ)n~S\no`]KacIJGZTvA5hR5fĉ7C\K:R|N 4{ag |)D>Sh-z'=t%yKzT8J8ZtWl^Sy/+um|8/vȇ(,§޿{$ 
|xC8y*?y$YIp~U@ۛKMTbs\z>A>)!F Bō^4)=gWMЕ,hG <~ܤ{PP( %!yXN&U,R4ûoԐ m./+x "M>I|TJ[}h 6(!sIrS$0IPTxZT/YVPe^Ro^@5"Bx:lN9Bd\`X~d=\$x5JD:ydz+wY%[Zns8y,I"N+EDB:$<d=!S4]GZd$ sU8 QuAbu׉~ Nb0+CาvPis/̤Dg$ec߷4.R:3`zd*?14;f_+3JԘ i9"K 9V2|sm5ҀeVН޼ 0d|D 'cyƯRPR#FuhP/ZLa\ D4 ,t lj%%̊ eR^ZiJFu-ChB谥{; PP''@:*#NKT&Q0DZKDҪPR40F7;FtVuvv<5SC,:(4@D,u*U,w G]]B1Q;vक़3:\\ rc'>6xѬ +]u@JZ)aV#YWLT)IӺNw8h R[qzl{bƟGKfoRڝ|P럷JBF]v,l4bnYB&7WX^":ﮰ>zFOm3mH2,]6 1Xs"֑b0N1`W"a`7#u6w"pŧ^+%"|o7kIT5$[<(vjԱ{9ٷOYPX٢{XܣKSlCcZݣ0N u1u4T{LSE^ 5=ԬOIU(k,O*>2&6{oiG_fT /m\o]gh7VyBuQob> {:I;Tp)3e;)o^Д\PSA[,3J]TuhVxYd`DItt=o$IփQozD\:w1"}Y͊upS>βM& tAQAգթSMז5g$-]IDl`S%ܿ<=]{L #bcVG/X&~rW-Tli_f'0gYOrgv+J4 pLu"ݭ)۬H )Jeuz~{\Po OL>v<_b|y'_wºQtV'rκ(PE~xZ_kQi{V{[ mb9-Pq pU\y;k&F! H׷}*aN-zd]> ;IFYRvνfùq>0 R9㿇gˇ&|/էXK+Y1Sףkir&-pPe'}`@h`(S{B؅o-L26T=Lmu1Zw_~OiԶb۽Ss'-91\PhN"zK>"άS[}y35%p}l5:Oڞ?םnJҵgA$9 ]ҤH*:?!w΁xZ3-~p -T2x̵S"qJ~qca:G2y\Xq!tuTiH4{Xg ?}X|ZE=QGcQ>?{F_kQ|wfatlY,R'v~l=֣nrL٬*YoV&c"Y^=1S~Rc22*`I$a\fQ2k׉{w` _ &e|זqCK_{pqqy/:tہb76/b/FTvu#^ ʴ_7 քP du~U槤m j^헃y M[/ XK~@1!N8K1#S.9emϑ_54[[._mWyiYT~>|9EA3r/dLFE ޓR8C2_4xQەT]: s܋.ܯLPK~1fY$0W+kb%z=}5j3QDVk( (%%.M7e糀Y[05XnF7AVjY'|{C5˜(-xhrޡ> iɁE=[q-TLK(]IMP +.靲bl nG-hif]RS9 UR"G_[gqSM.{SZ6 m(h`Ͼ7h5jC󀑇Z{_:|4z{aw_stUo@5:=qqja(bP9ns7S@" ӮO i qdoȸ( \#^1z؉Sң: cL'IV -\3:eZE\PB/_>qN"~n(\ -xfF;ջ?} c  $7L1Xo_y27<7`L +H`ԥ5^CJ#* pb $qwZ`nmjBae"I$Nm%I=Db$*$6bkAH&BVj)U=E1&(8K%,ƜZT?󤤣sf2Ћ}S #eSVtTwmjif`;5Sr|D4`uv&*u)M]i8y5F.G~$e)ߒ*c&5ՠ=Oˤ-x7ȣCo<{`2_z/ fc7pϿt~{sSs,>+_|&7ͧq#-Yfbb?O fn9*> Hyv?AqE1:ChɊi5U2$NI2bn"!QkΎE;@ ňEh:J廹[ r+bs;z;9%D9d( gi;a\`K"pUP;??yww#\E|ŋ> 罤V[ޑCޔ7|4Y΀u"׆`oBSb~xySw *G ^@l:ʂ袸+@W\Y>cx PcW#b_UԄC0@$o6t7*kU oIkcNseK):>`dl 0q_Gs-"ׅ}HP_*uq3*J .*3u">1;^_dS(#VՀ,YBw> cb`-!&m XuM$ߪ4XcouF?5I2+`iJ݆ MaC E}C%?x CGsQ0J'&8+"|YωV$2 \vZ.7Q)k:!j[k "eI)ꂀ4[ˎE,Vz]?:K,'F?ewRIITTWͤ+ia\0:]8A y JHb'{Ӯ0^ڍ4MFZ@ӼZ^`TC ƜȜy|SᰳI!1~äH[M!%kK*iRsY] U63}x:k0c3HXRO<;VOQ57y^nq'>~P|\b\x/ wݜ0FEicEOd1IF>f䬁H)>4AA@Kc3$)7Xbi*`wڭO*!2LfeRpU_281̹\Z" !`2͍IEbT * $Dj} K\D+*\P" WE\|{qN`)$ G 1*`v#G&oo-#ޚ!gj*`d?> 
[binary data removed: gzip-compressed contents of logs/kubelet.log.gz from the zuul-output tar archive — not recoverable as text]
\nWq㣺B0go%bL% _C Vg}@ku4 g n13Aڵ #E%` bBE25J S8&Hfj%ڿt͉~\Gd&a>5L,[x<j`+58N̍=v0hQ}.~ݵSeVW9US{ VYP DUںWG.j?uCSf65Cn5ve(Y/:)7_zzo./jE[L?4eӬ@wZm_UXSo\(wv7ϸ~6ȂYژ@ uNڵEr_ҕ̍oAs^nZo/_y^ގZ7Ɲo'u|/~X.o]- x@FMӦ_6Ow{чOI =Ť"&Oe6 Q;ܝ [/A1^uV's:[d8CHjqm.ο5 ڛOnܝG(5d,h?^|ro}胳)/[ۿ^<5n;g_fjBZznu;@Q7AZiunwLc.y{ u&0(\A&xiӽA5B;t{jofH F۷7*$蒠6o)S+nzT;x &ֿ5dj~P /2Z7I'`+,9Ө_e #OBnIGlV:I!BOcw:t0(-g~aF0QKs vUZ\& $޻j`p=d>~M\98Esr6*ypa6*I8$YN&CTgd} XMΰFy"d8K+-RO'l:]DucƘac3HXTm|J5 0GWvᨰ%x Xe..̵3VW*ڨGcXݙ$|܅ *^Mٟ޾|}9 Σ5P鿃Vh d[f \>PO > ]d덏mH/Gvk 5Խڥ뷺vr`3_wSi,Ifo9yeh ȧzEVr'ƙr6!N+r")VrA3 zԲ♶H$QnY[+,`ݛu7xmXxp]sthFA01Bg3-M3'OTrHL$ _fg)'/iaڠ 7KOo|ժPk@6,* R!zQ?k̄!Y pӀOwU}DS!M!VIx>..gi%hIݨUλe N+U&Wk6sW&_ۇs33$4MfrA՝S\jC"Wt(-DRiQ*1ŠqsX@I!"ϔ4J)V d+S'n.ceV,Lj'!,ηA^"@zݰxXϝXꉀAp,Qm_ה!Ɨpp73!55e#_GzqHߏiڏm|L†v&hsiw`x>7=G~@,?V8jn ûFu\Qǻ2x}{7w`KֆUI$Իq\e`:y)_ӛt;G!F+'  *QQ$"CCHFTsE$XC:b3:_8$L'(љupgCOkCx&!=_!ĹO,JK؞ph1QvJ8v ;%3p-&a^ \0C.<(-FqVb1vm4s3Fq'%9Rg9'G{7G;:ZI*9HTW*q %JE asxX**QXABicC}b!RpK[Ϙ4u' Q-Ba8YXcDG,Ib0PQHPL ,B*24ϙuBa1fLףx뿜0! _ܟd/)"L9bJ$b(*(Tױ![e3+bv6c/IjJ`-C,X)QDp,M"Nh0籈CօR9 ]Ds"鑍ܨH-g)zSczcm#j<1E:]Ot\(bwz%6_/{P_XW BoBOU"JK'*Y?.כ1G;Y{"V̳D%YUHk{-};Y~7ԾJg1h:pg~ܼc"{)4K }߲'?A c|Lpk+ŝ%:c]?v Y~T>\,ikٓ~gAg\`R|ŠΨ_{O,gNRb|Ӕ8n<#*J>GTr2Jbb*!Axc9Sw^#;Gz[:D9ȓ{d} a|Ho+ -<Q';*lOwh0Z!g(ͷLh[o[cFƙZXCXs;eքӝZ>VO!* ad`l>>ԏU$*Rhce] M7\UpiIqW"툤D'@- c)I^e&yl}xuלrtK*=n\ɉ@F%}R=S771b QKӌ"oZ-yjZ$]L1B(")EbI XFT&HUbmaH8CiAL&g󡿬sb&+t€fkԹ2nˌŬ<%p l†,MPsrZFwF?GAe{Sw`aȬO8Zn!QO@6wƓ_)$j $0ӠHjgo:KwP; _| %\eavWN`c;4&/!wK ;wGBwpL`֡q ,G=~wYL*oyd0%%I=]Q;Ϧ fr ;uwva"Ylezh/[}L >/Yǂeh⦧ÿzso],8}" F/jz;b Ip@IܥW*38s0d0>l0_~w9wqВVj"BKVyD8W˅w, OT$TSia Xb2jJ mޘo%#w_1 fWaޙw\$xmi b ;W~Q1ԡ9XaWsYٱݹ%_% e(pNt1wQ8rIK5 u De -K{#V؋O"ޱ.ʱ˂vhiS֭:!yIqqU޼Nٛ/,!UvP7O,bOm12RBnV5[IO8qv14":m Ƽ6 ]+>cl ;S@70iKcԯ).UM2Z{=ߎj~v;R^;"∈̴#-bnqi^Mvi H`5%D#|.xv1rgC<0oi0{1f.`aTtx-5fT"dɟ P,ۀD 3}$@vÉKy8mt.8D@[A ) )!E@^aC00$DXQƤ YOE*z _,Cp DXIeF2 `Q(\,< )ۘk 8Ro=Rǀ ?%+N]n*Näb%cSg%?曏X߯:Sm1c0K!IO" BX(0b1( LPW4$RrzK@K%g:^=ON6>N+%&nI3K&  jDCFB (aJ1QQe,*3H ∯;ݼn_ ss0'# 8”,N7'6;8IW4?~ 
dfPHOW'|\~?)=+8͟??}҆Zov|.`ni|&"Xo^3˝~>?Kf%9|zt7qP&zK,RCOG?fLFfi9u̟7j11ICmbw6|>W~Xʟ<%Ե֤)p |i:yr)hD}7so?ffu3zͦ׿l?&fKub+u:HsM QШU8Dcbn]s|"!31f|׫y|fm-f5@Z%>6jicσyGm y:*X\rRϘwR$87u Ĭ{JkV,gE҃vue|SC'B30r4Zu @K` RGRs`JJ9N{ #.OyoW*YhL\?5`>௿4$9x$dtWWoN FOso]i>gX =dw}A 8 YP:uA`WaP'g'̱z\|~*(Bs)2 S0H"֬9yWSnqg!v Y {괏yPѮrgͥ;Qd^fLubz֙2g٘H65jjbUmUeyl/cڙw\$xsаyP'<ا44@t_Tg+5~UuHK>J`٫5e⃀2递UuPBG"2|ƫxd@XҹQA+WЪ#@#T͑Fvx]EXUZ6jƨR+ѝ%!$'$Em4&9ñ$eݩחq-*.c|FWG\-2[Hj*F3a߆XBҋ0˞mVu.\T`IA;kL(q F;`7-"8w04`Ndnq~|ioFGK($$`2I !a 01I(R!G!1IgzH_!$R}C` ְwST:N~"{8W6+}U}F$&0X$LP1\" E'\diXF"bmj5F 4uK,]}[ uyT~\`.9.J4{MNPSb^uIR2mY9R9hDqbõ͌HLەT `Dgt 3Vt2"RyٷJ>V'2Ă0,PhpRJ\!XL4bF"e,AH L-WJSc(^^$6w^WO.s|Ϟ#:3P?)YoZ:.<姟78$f]mbg飅>>M1Bx_x^fb>f30!,ZfQb|]&Ӄ1_[?tu+>"PĘEtOohkqQtg$ y1+%}\G1MӭslOwI4F-*nF.D?yzrFiM` @}Ȳwww਻Ͳ i4diPlOsw*snGhE}Rd]&?9jy-Ihs݈ȄXQ? E8Ot]fĆaTₓ3g0q#d8۱j{-UI#]W L=#/Q|fg_i6uM(3y0M-UkaN9XQ#S~B"V5"`[J*X'8c:!1# g֢ndxOX σo-9kp5>guNʋ+MA4 k;'Vr>j.ףZGbtW sJ $/y P%Dr>rA%"j9 *$7E 4ZI!8u!XXbzhd`%ZcހlV Rح"0rp;Ml&FƀM19IQ;B@uƔ!UeLIy{ղʑޞJHVV]p x̕^W:vȫ:qS;UBR!ɫ*9=^~0YƀI~xÊbOaSP?\ *?PfP 8VjΒhJTc@[/!a,{A'5gPzqMsօa^Iy Ҵ!Z_/M#=>B.nV۽C)r9ʔ. j-/ubIbRfh<ͻHS7~f>LgDvM 44ʃ^tA'i÷}ӭ['t71C^F)6h]GKu3Z_98N@  i`Wn:6G^$tY=ݍstPQJlGϏmnFOpHgilpbՃSBr1SeH:nB FQAf8qTyf{Qh.g F 25 I[oPKZ^Xv}]A4iJhlI&1LpﴔWpeL2&ā5| ss˃s$ o,Y}׼9uMf25zm).7!#𯘿/>ԐJ^z{̅w<) ]/Y4s_2`c7uތldfTU O5]{hi>82@\Naz4H:(]%:$鄎*Bu?:|_#Us( =>(g'ӑKQa'\3Yqе͸;ϗ= `qwó,/7˗U4ٺ&{ w["fU K/pY*tR)h:MKQ?y~wab%[FN0Ez$ 6rY1_WنUwy׵*X²=ʮlDUV_!eEԿx{oqT#R{|zjսsZ\7Hd Jj9JdIdݖyw~_oːIØVBtD/x uK" Gq3hH0DtpdFd4)NÒ1ԩmi*p(Z̥Z!)V"J3N"m`N=GJ!<0/%c0zjr@]?!u&GcˤF1^e%MILPթS1/cfN]6bC&Q[!F'DRC0щl*JILEL.ːu2tEδ0C;J1v2C,8sLWoI@!Lb*SMKeD*KM(3P46uMIĖҁI Ibsut29r̭39sr Y#J£98_~yyNr*l|$?l C?wL-ti0#G+?_y?6|dţ{&?3v>`y5C{}sĀ: ?=x)c1[?tUzH `Ҽ)o7kV)'|0Mv̦5̎6Jf2wٓhhdkj`TJ"nx$~NYVMBiܮ8^IA8W;5Dse{}0JR#%BbF DXgo.n[,sro%oKA Ϥ-Y͖@}iQћ&EqKg oFNDSfk!@D%HLm ٨6$e^j$8Ӳ8P<IQA-BwBt-+@ Vl¶Q2tcO0RY"O  CKn >.gZO=8WFk6o5>S={;/ -˜b^㆘Z_تuڊ?Gۺeݜ 欎)ENn8GFe rDҽyW% C`t5 Tici ypk_=)օ3ā`]lseIeG|{"|cJ{k[Z::Äl; ={l |Y0#ǘ&n5X>E>hE)?yrFiM? 
C=,Fwdy[Kys@v?z[-QHO*{xו?9uD#MysK%[(ȄXQQ*O>ɛ~ 03l ڍ~HR_F(+ʷ}3́6ofX1@"|P6Y,JMO2 9&?Y#٭b8T P{Sg% KVQ>EЉwn;O%K/eG x54YLKE \5l{36#м#4d8%%aۭKusNJgzפ|=D_bB>3TQ{_<.tpsR(l csR(K'zpKJSHY(,+j;aNi y wq?j99[T-q/I {";P!{Iaq;Ih:VF8ѺuBOw;TeTN5{;~P7TP`&ڻp\UuY=ݍZ2dT_ǐͮP,!˰+سY=Q7nTʤ"TV8v#V?7 &Ivĉ"\N6 Ƶ_kuYptM>iikGQ0j;{l&%l'j23m fw^JװU^1\iwgj8&7E+A_5neG'(ʅW;[EVfk5ޖG]Y8Q,YPm^[+7, ȿ8{z;‹4o뾝\ u$gq#d8۱j{-]/EěroRu)AQn"]ma/5Jfx^fXM0NY$6as!) GLbj÷.0*uw#sNX σo.pW"q]wކs: QzyFMG`K\[w5c-\@耋@71 zj7D-2_z/K_uTQ@Vq)8QL1Dr8V&J$1@"WXmi|nnj nw? Mı9R#S[S5"K*X'IdzX"xjQS YRqJ{Zi ̠3%0?瑒a*+{#&~j\Nat'1)[IɶAjxcu!NEu]/Vcī4"#WtNi623Y TLyn/L)=Ğ\ܾEhE|") qNsY$T_y~DcU ^/a2Nlfw$B9`J۝ ?]ao!*%|&1iQ,&Tpģޱwe?OlA|[$zm=|)S/x(h F8F!Fn,~TO}8ИjN2g|yaQ Ŀ-[">Mv;lg qk2/Z#in';Ϧl&GTq}[f+DN4wmmy9n΢< &9e3B5vHr&Se-K6ݭK,Hf}U,XD+$&XFFq$SLS2(!HHm \Nz_UKB!Լ&R1ԟZy_Jd3'xVYAtuI$=Oy4N@;ûrPAA tЅҧkvN!e',=0d#Fvxn /'R&1zGSl +Et`B%IVV2 FI8ּc5<{ѷT!{n-b;}e纍nZĖr:rpv\y]ۯq@N4C;p`{.W/@y`'a?(T}6s1ƷjhCdΗ%Ҵ|/&PbFz+$c(ebFhBtJ[e|+0kgoBW%>g+ T`|rPDX%er뤼qkE%1:G~aa 5*0 J+݆!F W'x N`%V%qMz`A+,ހ4%s+Ӈţ{i5yCxҐSn+8o嶶kgvA~;ܢ=y~VAa 0،p7)DL"iEq7sMć܇a&`QS}QP %U`P]TAr&:zªxPȗ|" Lpt/p O Xx$6R!-K)ԔFw+-К@@wH,VUIq+5NDGS$R[DuqZGK> +LS#+H.r2RpZTsJ_,KF͐;nqW< /ϯ/AHJ}k9}t> !'!9#j^RZmb{RXAvTPB)*H5$wZXMtMbK\-H__V$}CVK6/E&> _>)k-(V\l#e,{??WsQ*8Xϫc1I2H]=cl<5w'}_bS {Wdœ~:s훺ڷ==zoNf#j1hG7exNhF5~vh48˩.S.÷]@֍{x)>~U?JoNJȞb}ъQh]qGŭWqz}Xp&X8&Ee^4 uP[lm߷SR 3%FAQ޽kիd-ӗB==˓Lzf1/yF]uf,5LjσX׫xNΆqPayx7}UH.[G͛340pAO@y|821 {C' Mx/ẗ́m#Η֌(ગ&%bԈU4p"@DO:$0oJ$ƴN1A|҆%|PD5n l=+`B%ݜ${۸.vxKw݃ɢ)DIY# AJWe2e[n498! 6Hqb`",Π(Fq)7'4fkFb)#)8H41GaͅJ$}4M1s*!1!OXpM|W770b\~ȷ<$ɾ͇71Jqt=ZvD`NgўW"HѿG#N;D4HV"6 xi۪>5E&%*ш$xY[Y6ݻEVMg23Ys_1Ahsoۛj%7PTj F|S610߮;'Mglҷ3 =h N70Fuタ|{c٠N9vB׶xLϞZtX^Pg$+cz:1s=+<+Y!1sAK08[yuJyHfQφ!)]xG߆vc\B:cnsAgkS[y筷87Ǜ/B:cn"8.Z':j&!) 
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[1853114698]: ---"Objects listed" error: 12826ms (00:07:56.068)
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[1853114698]: [12.826876176s] [12.826876176s] END
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.068430 4697 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.068458 4697 trace.go:236] Trace[427781933]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:07:42.362) (total time: 13705ms):
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[427781933]: ---"Objects listed" error: 13705ms (00:07:56.068)
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[427781933]: [13.705585798s] [13.705585798s] END
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.068476 4697 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.069600 4697 trace.go:236] Trace[751802574]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:07:42.733) (total time: 13336ms):
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[751802574]: ---"Objects listed" error: 13336ms (00:07:56.069)
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[751802574]: [13.336365448s] [13.336365448s] END
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.069629 4697 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.071434 4697 trace.go:236] Trace[2041769970]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 00:07:42.052) (total time: 14019ms):
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[2041769970]: ---"Objects listed" error: 14019ms (00:07:56.071)
Jan 26 00:07:56 crc kubenswrapper[4697]: Trace[2041769970]: [14.019169438s] [14.019169438s] END
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.071461 4697 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.072613 4697 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.389706 4697 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36770->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.389756 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36770->192.168.126.11:17697: read: connection reset by peer"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.389706 4697 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36784->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.389889 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36784->192.168.126.11:17697: read: connection reset by peer"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.390446 4697 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.390483 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.390711 4697 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.390742 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.594556 4697 apiserver.go:52] "Watching apiserver"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.597802 4697 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.598100 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.598461 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.598546 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.598688 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.598751 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.598889 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.599032 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.599061 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.599310 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.599357 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.600540 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.601928 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.602895 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.603295 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.603413 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.603533 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.603751 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.603779 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.603937 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.605825 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 05:22:11.4253493 +0000 UTC
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.629883 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.646522 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.658097 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.671765 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.686627 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.695582 4697 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.698539 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.727762 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.760914 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.762416 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928" exitCode=255 Jan 26 00:07:56 crc 
kubenswrapper[4697]: I0126 00:07:56.762453 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928"} Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.771694 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.772413 4697 scope.go:117] "RemoveContainer" containerID="173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.772437 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777134 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777164 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777184 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777203 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777222 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777237 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777253 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777268 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777283 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777300 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777317 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777333 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777348 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777364 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777380 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777395 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777411 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777431 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 00:07:56 crc 
kubenswrapper[4697]: I0126 00:07:56.777447 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777464 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777479 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777494 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777509 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777524 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777542 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777582 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777598 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777615 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777633 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 00:07:56 crc 
kubenswrapper[4697]: I0126 00:07:56.777650 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777667 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777682 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777698 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777716 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777741 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777762 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777778 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777795 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777812 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777835 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777865 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777896 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777938 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777953 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777968 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.777983 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778014 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778031 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778048 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778065 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778086 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: 
"e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778097 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778142 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778183 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778202 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778219 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778243 4697 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778254 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778291 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778313 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778297 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778332 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778351 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778369 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778386 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778404 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778423 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778439 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778451 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778459 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778483 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778585 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778604 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778614 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778638 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778657 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778674 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778690 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778724 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778742 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778761 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778769 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778794 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778811 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778827 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778846 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778857 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778884 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778967 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.778986 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779004 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779036 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779053 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779087 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779105 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779120 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779135 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779169 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779186 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 
00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779203 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779219 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779254 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779271 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779288 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779322 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779341 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779354 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779362 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779378 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779414 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779415 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779431 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779536 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779578 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779621 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779662 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779701 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779708 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779740 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779779 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779817 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779855 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779892 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779929 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779964 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780000 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780037 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780100 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780145 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780202 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780237 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780273 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780305 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780339 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780372 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780506 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780545 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780581 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780617 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780656 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780689 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780721 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780754 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780788 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780821 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780865 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780901 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780936 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780972 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781005 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781038 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781116 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781154 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781190 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781224 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781260 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781293 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781328 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781363 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781415 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781451 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781490 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781530 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781609 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781645 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781678 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781713 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781746 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: 
\"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781779 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781820 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781855 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781880 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781912 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.782587 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.782865 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.782908 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.782946 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.782981 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783015 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 
00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783044 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783097 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783128 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783157 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783184 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783209 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783239 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783269 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783296 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783322 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783348 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 00:07:56 crc 
kubenswrapper[4697]: I0126 00:07:56.783379 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783405 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783428 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783455 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783478 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783504 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783537 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783563 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783591 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783616 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783642 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783666 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783691 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783720 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783748 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783774 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783801 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") 
pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783829 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783895 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783934 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783966 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783999 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784036 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784063 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784111 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784142 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784174 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784204 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784232 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784259 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784284 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc 
kubenswrapper[4697]: I0126 00:07:56.784313 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784372 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784390 4697 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784405 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784421 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784436 4697 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784454 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784471 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784487 4697 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784502 4697 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784518 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784534 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784550 4697 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.784674 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.791004 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.791910 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779747 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.779691 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780024 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780608 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780908 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.780962 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781178 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781205 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781239 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781318 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781416 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781461 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.781730 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783245 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783762 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783851 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.783870 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.785290 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.785815 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.786252 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.786861 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.787114 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.788701 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.789142 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.789588 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.790684 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792008 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792037 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792265 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792514 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.792545 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:07:57.292519579 +0000 UTC m=+18.929297159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794329 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794358 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794374 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794530 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794583 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794575 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792857 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792890 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792964 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.793178 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794873 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.793414 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.795809 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.793282 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794884 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.794973 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.795016 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.795037 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.795442 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.795483 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.795913 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.795943 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796518 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796105 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796550 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796137 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796278 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796590 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796873 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796915 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.796951 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.792699 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.797941 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.798025 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:07:57.298004498 +0000 UTC m=+18.934781898 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.799894 4697 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.800327 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.800569 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.800589 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.801267 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.801623 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.801691 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.801894 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.801936 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.802089 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.802280 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.802466 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.802475 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.802988 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.803033 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.803187 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:07:57.303148528 +0000 UTC m=+18.939925958 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.803913 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.803955 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.804386 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.804480 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.804717 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.805411 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.806271 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.807274 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.807769 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.808465 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.809284 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.809835 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.810180 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.810337 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.810636 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.811100 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.811467 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.811514 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.812258 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.812280 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.812678 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.812796 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.813660 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.817329 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.817807 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.818129 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.819131 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.819177 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.819432 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.819639 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.820495 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.820530 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.820546 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.820644 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:07:57.320619911 +0000 UTC m=+18.957397511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.821618 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.821807 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.821878 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.821915 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.822011 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.822620 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.822876 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.822926 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.822943 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.823161 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:07:57.32313727 +0000 UTC m=+18.959914670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.823156 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.823737 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.823790 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.824145 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.824211 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.824356 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.824621 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.824685 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.824806 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.824875 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.825110 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.825136 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.825983 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.826013 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.826422 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.826502 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.826691 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.827168 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.827520 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.827674 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.827689 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.827252 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.827804 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.828269 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.828662 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.829243 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.829277 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.829660 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.829863 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.830013 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.831208 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.831247 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.831749 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.831878 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.831992 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.833246 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.833288 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.833378 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.833434 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.833631 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.833643 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.834631 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.834769 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.834810 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.835433 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.835449 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.835773 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.835803 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.835933 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.836207 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.836773 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.837037 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.836443 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839223 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839244 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839299 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839415 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839520 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839629 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839852 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.839905 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.840228 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.840326 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.840827 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.840978 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.844895 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.845181 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.846741 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.847464 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.847846 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.848009 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.848655 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.849106 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.852453 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.849460 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.852840 4697 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.852885 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.852911 4697 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.852915 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.854094 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.854122 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.854130 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.855019 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.855039 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:56Z","lastTransitionTime":"2026-01-26T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.871316 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.874198 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.874311 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.874430 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.876121 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.879637 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.879689 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.879702 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.879724 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.879744 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:56Z","lastTransitionTime":"2026-01-26T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.885832 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.885870 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.885955 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.885972 4697 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.885985 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.885996 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886009 4697 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886022 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886034 4697 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886046 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886058 4697 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886086 4697 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886100 4697 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886112 4697 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886123 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886134 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886146 4697 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886158 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886170 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886156 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886185 4697 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886272 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886291 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886307 4697 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886320 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886336 4697 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886354 4697 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886370 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886388 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886403 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886418 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886433 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886447 4697 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886461 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886473 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886486 4697 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886499 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886124 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886514 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886576 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886593 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886606 4697 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886619 4697 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886631 4697 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886643 4697 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886654 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886666 4697 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886678 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886688 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886700 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886712 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886723 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886735 4697 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886746 4697 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886757 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886768 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 
crc kubenswrapper[4697]: I0126 00:07:56.886778 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886798 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886809 4697 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886820 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886831 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886842 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886853 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886864 4697 reconciler_common.go:293] "Volume detached for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886874 4697 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886884 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886896 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886907 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886920 4697 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886931 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886942 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node 
\"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886954 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886965 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886976 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.886988 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887000 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887013 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887026 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887038 4697 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887049 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887061 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887089 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887104 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887115 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887155 4697 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887169 4697 reconciler_common.go:293] "Volume detached for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887182 4697 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887193 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887205 4697 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887220 4697 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887231 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887257 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887271 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887284 4697 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887296 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887308 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887323 4697 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887335 4697 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887350 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887363 4697 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887374 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887387 4697 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887399 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887411 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887424 4697 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887435 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887448 4697 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887459 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887472 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887484 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887496 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887557 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887571 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887711 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887799 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887814 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887846 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887882 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887895 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887907 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887918 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc 
kubenswrapper[4697]: I0126 00:07:56.887956 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887968 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887982 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.887993 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888005 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888016 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888028 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888039 4697 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888052 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888064 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888112 4697 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888125 4697 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888138 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888150 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888162 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888175 4697 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888187 4697 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888199 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888211 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888223 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888236 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888254 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 
00:07:56.888265 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888277 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888289 4697 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888301 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888313 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888324 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888334 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888346 4697 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888357 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888369 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888382 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888394 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888439 4697 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888454 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888464 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888508 4697 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888521 4697 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888560 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888576 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888589 4697 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888602 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888614 4697 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888626 4697 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888637 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888649 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888661 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888673 4697 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888685 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888698 4697 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc 
kubenswrapper[4697]: I0126 00:07:56.888709 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888721 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888733 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888744 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888756 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888768 4697 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888779 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888790 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888803 4697 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888815 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888827 4697 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888841 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.888853 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.891453 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.895146 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.895183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.895195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.895216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.895231 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:56Z","lastTransitionTime":"2026-01-26T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.904697 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.908252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.908299 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.908308 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.908323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.908333 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:56Z","lastTransitionTime":"2026-01-26T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.913805 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.919252 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.923249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.923494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.923560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.923688 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.923727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.924053 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:56Z","lastTransitionTime":"2026-01-26T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:56 crc kubenswrapper[4697]: W0126 00:07:56.931159 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5c53d285c61d68a3930e248231b8ad13e0854767c2ca6e43951ce16569e241c4 WatchSource:0}: Error finding container 5c53d285c61d68a3930e248231b8ad13e0854767c2ca6e43951ce16569e241c4: Status 404 returned error can't find the container with id 5c53d285c61d68a3930e248231b8ad13e0854767c2ca6e43951ce16569e241c4 Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.931629 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 00:07:56 crc kubenswrapper[4697]: W0126 00:07:56.931871 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-cf21d99dc4c062b9d60b08b9b5aa6388fe21ee93bd50e00280a4a79b73b3bacb WatchSource:0}: Error finding container cf21d99dc4c062b9d60b08b9b5aa6388fe21ee93bd50e00280a4a79b73b3bacb: Status 404 returned error can't find the container with id cf21d99dc4c062b9d60b08b9b5aa6388fe21ee93bd50e00280a4a79b73b3bacb Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.935179 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 00:07:56 crc kubenswrapper[4697]: E0126 00:07:56.935430 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.939222 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.939256 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.939265 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.939282 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:56 crc kubenswrapper[4697]: I0126 00:07:56.939295 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:56Z","lastTransitionTime":"2026-01-26T00:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:56 crc kubenswrapper[4697]: W0126 00:07:56.957105 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-0034074755b2615659e711933aed61962e32551d37a7ed0d3086a3535db29f8e WatchSource:0}: Error finding container 0034074755b2615659e711933aed61962e32551d37a7ed0d3086a3535db29f8e: Status 404 returned error can't find the container with id 0034074755b2615659e711933aed61962e32551d37a7ed0d3086a3535db29f8e Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.042871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.042933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.042947 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.042971 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.042986 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.145963 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.146006 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.146018 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.146035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.146048 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.248732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.248786 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.248804 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.248821 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.248833 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.293224 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.293464 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:07:58.293428832 +0000 UTC m=+19.930206242 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.351671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.351721 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.351732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.351749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.351761 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.394271 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.394310 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.394328 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.394345 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394448 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394465 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394474 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394523 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:07:58.394510392 +0000 UTC m=+20.031287782 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394528 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394660 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 00:07:58.394632156 +0000 UTC m=+20.031409546 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394672 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394822 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:07:58.39479069 +0000 UTC m=+20.031568120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394543 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394883 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394909 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:57 crc kubenswrapper[4697]: E0126 00:07:57.394992 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:07:58.394967615 +0000 UTC m=+20.031745145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.454611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.454653 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.454663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.454681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.454690 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.558931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.558977 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.558993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.559017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.559034 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.605931 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:32:52.74082229 +0000 UTC Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.661868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.661900 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.661916 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.661934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.661947 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.764123 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.764163 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.764178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.764198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.764212 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.770193 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.772316 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.772522 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.775098 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.775145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.775158 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0034074755b2615659e711933aed61962e32551d37a7ed0d3086a3535db29f8e"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.776335 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5c53d285c61d68a3930e248231b8ad13e0854767c2ca6e43951ce16569e241c4"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.777973 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.778005 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cf21d99dc4c062b9d60b08b9b5aa6388fe21ee93bd50e00280a4a79b73b3bacb"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.794219 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.808517 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.825333 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.842284 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.860452 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.866423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.866451 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.866459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.866476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.866487 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.872719 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.883652 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.894957 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.908054 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.920780 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.936028 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.950982 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.964540 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.968389 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.968414 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.968423 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.968436 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.968446 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:57Z","lastTransitionTime":"2026-01-26T00:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:57 crc kubenswrapper[4697]: I0126 00:07:57.978194 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:57Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.071009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.071054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 
00:07:58.071064 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.071096 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.071108 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.173285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.173325 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.173334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.173349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.173359 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.275328 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.275369 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.275381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.275397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.275409 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.301861 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.302110 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:08:00.3020489 +0000 UTC m=+21.938826290 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.377626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.377663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.377672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.377685 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.377697 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.403248 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.403301 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.403327 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.403351 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403421 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403462 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403491 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:00.40347271 +0000 UTC m=+22.040250100 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403463 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403511 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:00.403500451 +0000 UTC m=+22.040277861 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403517 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403510 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403575 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403591 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403669 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:00.403647005 +0000 UTC m=+22.040424395 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403532 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.403727 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:00.403721097 +0000 UTC m=+22.040498487 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.479836 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.479871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.479879 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.479893 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.479903 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.520310 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.523688 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.527796 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.534745 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.546894 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.558644 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.575975 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.581413 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.581450 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.581464 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.581481 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.581492 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.590468 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.605053 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.606131 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:58:34.948301373 +0000 UTC Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.620220 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.633307 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.643965 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.654412 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.660364 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.660376 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.660427 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.660519 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.660570 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:07:58 crc kubenswrapper[4697]: E0126 00:07:58.660637 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.664577 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.665333 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.666480 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.667205 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.667753 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.667954 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.668541 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.669216 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.669814 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.670544 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.671121 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.671689 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.672929 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.674365 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.675188 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.675795 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.676367 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.676994 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.677397 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.677977 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.678542 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.679028 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.679621 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.680042 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.680190 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"
/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-o
perator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.680785 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.681270 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.681886 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.682690 4697 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.684129 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.684179 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.684190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.684207 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.684217 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.684247 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.685012 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.686155 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.686716 4697 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.686815 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.688908 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.689511 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.690128 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.691953 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.692956 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.693566 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.694723 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.694821 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.695591 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.696636 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.697464 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.698668 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.699853 4697 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.700427 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.701056 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.702024 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.703502 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.704662 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.705376 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.706285 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.706821 4697 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.707916 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.708423 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.708838 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.724417 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.742764 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.757620 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.773210 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.787178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.787217 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.787227 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.787242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.787253 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.787778 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.801324 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.816811 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.832742 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.847540 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.891616 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.891683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.891696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.891714 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.891725 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.994371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.994414 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.994424 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.994441 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:58 crc kubenswrapper[4697]: I0126 00:07:58.994453 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:58Z","lastTransitionTime":"2026-01-26T00:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.097489 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.097534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.097548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.097604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.097618 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.200816 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.200868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.200881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.200898 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.200910 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.303450 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.303488 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.303496 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.303510 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.303519 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.405928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.405983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.405995 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.406012 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.406027 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.508259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.508299 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.508308 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.508321 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.508330 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.606990 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:28:15.567119779 +0000 UTC Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.610541 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.610573 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.610583 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.610599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.610610 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.713198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.713238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.713265 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.713280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.713290 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.784435 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.799094 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\
":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.814767 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.815036 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.815061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.815089 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc 
kubenswrapper[4697]: I0126 00:07:59.815356 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.815399 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.827161 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 
00:07:59.841240 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.853819 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.865868 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.878048 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.894752 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:07:59Z is after 2025-08-24T17:21:41Z" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.917690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.917944 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.918016 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.918105 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:07:59 crc kubenswrapper[4697]: I0126 00:07:59.918176 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:07:59Z","lastTransitionTime":"2026-01-26T00:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.020193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.020256 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.020272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.020295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.020312 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.122457 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.122516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.122526 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.122544 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.122557 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.225294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.225360 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.225371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.225393 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.225408 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.317755 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.317983 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:08:04.317956601 +0000 UTC m=+25.954733991 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.328226 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.328285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.328297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.328319 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.328334 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.418779 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.418849 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.418883 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.418916 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.418999 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419049 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419133 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419149 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419197 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419099 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419376 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419396 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419116 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:04.419088033 +0000 UTC m=+26.055865423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419463 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:04.419430172 +0000 UTC m=+26.056207702 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419481 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:04.419474273 +0000 UTC m=+26.056251873 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.419500 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:04.419490844 +0000 UTC m=+26.056268444 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.432533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.432595 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.432607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.432626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.432644 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.535320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.535374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.535383 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.535406 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.535420 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.609738 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 04:55:03.182206058 +0000 UTC Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.638623 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.638677 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.638691 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.638712 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.638731 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.660117 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.660141 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.660329 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.660143 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.660413 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:00 crc kubenswrapper[4697]: E0126 00:08:00.660487 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.686123 4697 csr.go:261] certificate signing request csr-b4699 is approved, waiting to be issued Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.728919 4697 csr.go:257] certificate signing request csr-b4699 is issued Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.742302 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.742359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.742371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.742390 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.742404 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.844957 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.844999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.845010 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.845028 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.845042 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.902647 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6ddpl"] Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.903043 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bgwmq"] Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.903198 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.903302 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.905102 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.905514 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.905640 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.905893 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.906013 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.906200 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.906328 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.919695 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:00Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.925413 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f2134310-ccdf-4e23-bb12-123af52cc758-serviceca\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.925550 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4b26ea82-1613-4153-8587-2e598acccba0-hosts-file\") pod \"node-resolver-6ddpl\" (UID: \"4b26ea82-1613-4153-8587-2e598acccba0\") " pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.925584 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtgsn\" (UniqueName: \"kubernetes.io/projected/4b26ea82-1613-4153-8587-2e598acccba0-kube-api-access-vtgsn\") pod \"node-resolver-6ddpl\" (UID: \"4b26ea82-1613-4153-8587-2e598acccba0\") " pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.925628 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2134310-ccdf-4e23-bb12-123af52cc758-host\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.925658 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmzf\" (UniqueName: \"kubernetes.io/projected/f2134310-ccdf-4e23-bb12-123af52cc758-kube-api-access-lxmzf\") pod \"node-ca-bgwmq\" (UID: 
\"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.948156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.948220 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.948237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.948258 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.948281 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:00Z","lastTransitionTime":"2026-01-26T00:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.953730 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:00Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:00 crc kubenswrapper[4697]: I0126 00:08:00.992576 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:00Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.026830 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/4b26ea82-1613-4153-8587-2e598acccba0-hosts-file\") pod \"node-resolver-6ddpl\" (UID: \"4b26ea82-1613-4153-8587-2e598acccba0\") " pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.027224 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxmzf\" (UniqueName: \"kubernetes.io/projected/f2134310-ccdf-4e23-bb12-123af52cc758-kube-api-access-lxmzf\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.027385 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtgsn\" (UniqueName: \"kubernetes.io/projected/4b26ea82-1613-4153-8587-2e598acccba0-kube-api-access-vtgsn\") pod \"node-resolver-6ddpl\" (UID: \"4b26ea82-1613-4153-8587-2e598acccba0\") " pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.027478 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2134310-ccdf-4e23-bb12-123af52cc758-host\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.027618 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f2134310-ccdf-4e23-bb12-123af52cc758-host\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.027063 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4b26ea82-1613-4153-8587-2e598acccba0-hosts-file\") pod \"node-resolver-6ddpl\" (UID: 
\"4b26ea82-1613-4153-8587-2e598acccba0\") " pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.027643 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f2134310-ccdf-4e23-bb12-123af52cc758-serviceca\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.030095 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f2134310-ccdf-4e23-bb12-123af52cc758-serviceca\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.049044 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.050463 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.050514 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.050525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.050542 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.050553 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.058826 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtgsn\" (UniqueName: \"kubernetes.io/projected/4b26ea82-1613-4153-8587-2e598acccba0-kube-api-access-vtgsn\") pod \"node-resolver-6ddpl\" (UID: \"4b26ea82-1613-4153-8587-2e598acccba0\") " pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.065982 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.070193 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxmzf\" (UniqueName: \"kubernetes.io/projected/f2134310-ccdf-4e23-bb12-123af52cc758-kube-api-access-lxmzf\") pod \"node-ca-bgwmq\" (UID: \"f2134310-ccdf-4e23-bb12-123af52cc758\") " pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.082434 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.101267 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.119449 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.132493 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.148743 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.153327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.153364 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.153378 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.153399 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.153416 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.161455 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.173789 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.192090 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.209312 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.222218 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6ddpl" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.228399 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bgwmq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.234453 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.236744 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b26ea82_1613_4153_8587_2e598acccba0.slice/crio-21a69faa03b71e693df17adc253be2683f831738a0a17f5b62b207c9ae5259d8 WatchSource:0}: Error finding container 21a69faa03b71e693df17adc253be2683f831738a0a17f5b62b207c9ae5259d8: Status 404 returned error can't find the container with id 21a69faa03b71e693df17adc253be2683f831738a0a17f5b62b207c9ae5259d8 Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.252946 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.257949 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.257990 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.258001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.258020 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.258032 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.265357 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.279733 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.295798 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.361925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.361964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.361975 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 
00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.361992 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.362002 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.464475 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.464544 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.464558 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.464580 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.464592 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.567160 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.567202 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.567213 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.567229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.567239 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.610514 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:44:23.277194632 +0000 UTC Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.668984 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.669014 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.669024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.669038 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.669047 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.711307 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h7x5s"] Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.712104 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-bjlq7"] Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.712324 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.712350 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.714170 4697 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.714208 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.714397 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-sb8k8"] Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.714954 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.715623 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mb5j7"] Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.715856 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.722377 4697 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.722424 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.722497 4697 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.722582 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723462 4697 reflector.go:561] 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723498 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723468 4697 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723542 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723567 4697 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace 
"openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723594 4697 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723610 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723614 4697 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723617 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723632 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch 
*v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723640 4697 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723571 4697 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723658 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723661 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between 
node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723681 4697 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723694 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723723 4697 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723734 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.723756 4697 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: 
User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.723771 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.723826 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 00:08:01 crc kubenswrapper[4697]: W0126 00:08:01.724186 4697 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.724194 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 00:08:01 crc kubenswrapper[4697]: E0126 00:08:01.724206 4697 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 00:08:01 crc 
kubenswrapper[4697]: I0126 00:08:01.724580 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.726057 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.726231 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.730398 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 00:03:00 +0000 UTC, rotation deadline is 2026-12-12 16:17:59.759947741 +0000 UTC Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.730442 4697 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7696h9m58.029507551s for next certificate rotation Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.738971 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-var-lib-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739005 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-log-socket\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739023 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-daemon-config\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739042 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-kubelet\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739088 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739110 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cnibin\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739132 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-socket-dir-parent\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739147 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-k8s-cni-cncf-io\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739163 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-cni-bin\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739180 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-conf-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739194 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-multus-certs\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739208 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cnibin\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739225 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-os-release\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739241 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-netns\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739259 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-slash\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739272 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-netd\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739288 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-cni-multus\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739302 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rbsfg\" (UniqueName: \"kubernetes.io/projected/638a78f4-bdb3-4d78-8faf-b4bc299717d2-kube-api-access-rbsfg\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739319 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-binary-copy\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739334 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739348 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-bin\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739362 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-env-overrides\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739379 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-systemd-units\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739393 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739420 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-systemd\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739436 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-hostroot\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739451 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-etc-kubernetes\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739466 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-netns\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739481 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-node-log\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739497 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtbg8\" (UniqueName: \"kubernetes.io/projected/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-kube-api-access-vtbg8\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739514 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cni-binary-copy\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739531 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 
00:08:01.739546 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d3adb1-27d5-4fa0-a85e-35000080ac39-mcd-auth-proxy-config\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739561 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-etc-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739577 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-system-cni-dir\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739592 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2d3adb1-27d5-4fa0-a85e-35000080ac39-rootfs\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739607 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-kubelet\") pod \"multus-bjlq7\" (UID: 
\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739624 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2d3adb1-27d5-4fa0-a85e-35000080ac39-proxy-tls\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739641 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-script-lib\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739656 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-ovn\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739670 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-cni-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739686 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9ws\" (UniqueName: \"kubernetes.io/projected/e2d3adb1-27d5-4fa0-a85e-35000080ac39-kube-api-access-4g9ws\") pod 
\"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739701 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-os-release\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739716 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h725n\" (UniqueName: \"kubernetes.io/projected/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-kube-api-access-h725n\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739736 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739753 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739768 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-system-cni-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.739781 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.740227 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.762028 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.771269 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.771307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.771318 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.771340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.771356 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.777041 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.792143 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6ddpl" event={"ID":"4b26ea82-1613-4153-8587-2e598acccba0","Type":"ContainerStarted","Data":"85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.792225 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6ddpl" event={"ID":"4b26ea82-1613-4153-8587-2e598acccba0","Type":"ContainerStarted","Data":"21a69faa03b71e693df17adc253be2683f831738a0a17f5b62b207c9ae5259d8"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.793277 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bgwmq" event={"ID":"f2134310-ccdf-4e23-bb12-123af52cc758","Type":"ContainerStarted","Data":"683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.793335 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bgwmq" event={"ID":"f2134310-ccdf-4e23-bb12-123af52cc758","Type":"ContainerStarted","Data":"d19e2e53b56c430c0682ca58c0a0e34e5e713e1eb5e73588eac72e9aadc01c89"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 
00:08:01.795199 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.810274 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.828594 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841003 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-netns\") pod 
\"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841048 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-node-log\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841084 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtbg8\" (UniqueName: \"kubernetes.io/projected/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-kube-api-access-vtbg8\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841105 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cni-binary-copy\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841125 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841145 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d3adb1-27d5-4fa0-a85e-35000080ac39-mcd-auth-proxy-config\") pod \"machine-config-daemon-mb5j7\" (UID: 
\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841169 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-etc-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841187 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-system-cni-dir\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841207 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2d3adb1-27d5-4fa0-a85e-35000080ac39-rootfs\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841224 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-kubelet\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841221 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-node-log\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841244 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2d3adb1-27d5-4fa0-a85e-35000080ac39-proxy-tls\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841322 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-etc-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841318 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e2d3adb1-27d5-4fa0-a85e-35000080ac39-rootfs\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841344 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-script-lib\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841368 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-system-cni-dir\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " 
pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841379 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-kubelet\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841411 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-ovn\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841142 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-netns\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841384 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-ovn\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841470 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-cni-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841495 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4g9ws\" (UniqueName: \"kubernetes.io/projected/e2d3adb1-27d5-4fa0-a85e-35000080ac39-kube-api-access-4g9ws\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841517 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-os-release\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841539 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h725n\" (UniqueName: \"kubernetes.io/projected/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-kube-api-access-h725n\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841576 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841598 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841613 
4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-os-release\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841621 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-system-cni-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841668 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-system-cni-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841679 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841699 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-cni-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841709 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841716 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-var-lib-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841753 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-log-socket\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841783 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-daemon-config\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841788 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841757 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-var-lib-openvswitch\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841810 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-kubelet\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841822 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-log-socket\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841849 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841866 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-kubelet\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841896 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cnibin\") pod \"multus-bjlq7\" (UID: 
\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841922 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-socket-dir-parent\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841935 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cnibin\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841942 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-k8s-cni-cncf-io\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841964 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-k8s-cni-cncf-io\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.841980 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-cni-bin\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 
00:08:01.842001 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-socket-dir-parent\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842037 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-conf-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842055 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-multus-certs\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842085 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-conf-dir\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842092 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-tuning-conf-dir\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842102 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cnibin\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842108 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-cni-bin\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842097 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-multus-certs\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842204 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cnibin\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842323 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e2d3adb1-27d5-4fa0-a85e-35000080ac39-mcd-auth-proxy-config\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842320 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-os-release\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842405 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-netns\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842417 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-os-release\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842350 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842467 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-slash\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842487 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-run-netns\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842502 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-netd\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842528 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-slash\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842535 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-cni-multus\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842566 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-netd\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842566 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbsfg\" (UniqueName: \"kubernetes.io/projected/638a78f4-bdb3-4d78-8faf-b4bc299717d2-kube-api-access-rbsfg\") pod \"multus-bjlq7\" (UID: 
\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842609 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-binary-copy\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842662 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842681 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-bin\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842700 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-env-overrides\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842720 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-systemd-units\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842736 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842777 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-systemd\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842796 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-hostroot\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842813 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-etc-kubernetes\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.842864 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-etc-kubernetes\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.843131 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-host-var-lib-cni-multus\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.843249 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-systemd-units\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.843296 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-ovn-kubernetes\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.843335 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-systemd\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.843383 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/638a78f4-bdb3-4d78-8faf-b4bc299717d2-hostroot\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.843399 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-bin\") pod 
\"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.845414 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e2d3adb1-27d5-4fa0-a85e-35000080ac39-proxy-tls\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.845840 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.857302 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.870605 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.873459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 
00:08:01.873489 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.873501 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.873517 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.873527 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.883029 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.901355 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.917345 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.945124 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.964900 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.975386 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.975416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.975426 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.975444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.975453 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:01Z","lastTransitionTime":"2026-01-26T00:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.977961 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:01 crc kubenswrapper[4697]: I0126 00:08:01.991175 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:01Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.003648 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.013455 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.025141 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.040113 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.053141 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.065143 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.075132 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.077707 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.077754 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.077769 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.077786 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.077798 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.088678 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.099507 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.110900 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:02Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.180065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.180380 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.180533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.180649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.180834 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.283756 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.283800 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.283810 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.283826 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.283842 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.386753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.386819 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.386832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.386853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.386866 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.489171 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.489215 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.489224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.489238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.489246 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.552060 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.592139 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.592447 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.592495 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.592512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.592523 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.610714 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:26:26.619206361 +0000 UTC
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.621607 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.624454 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-env-overrides\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.645681 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.660314 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.660365 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.660396 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.660511 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.660634 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.660702 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.699507 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.699874 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.699943 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.700107 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.700192 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.722190 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.733391 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-script-lib\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.752270 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.803293 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.803340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.803357 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.803378 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.803394 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.841608 4697 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.841740 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cni-binary-copy podName:638a78f4-bdb3-4d78-8faf-b4bc299717d2 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.341713062 +0000 UTC m=+24.978490452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cni-binary-copy") pod "multus-bjlq7" (UID: "638a78f4-bdb3-4d78-8faf-b4bc299717d2") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.841990 4697 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.842041 4697 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.842099 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert podName:9b97fcec-14c2-49b1-bdc5-762e1b42d7a4 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.342063002 +0000 UTC m=+24.978840392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert") pod "ovnkube-node-h7x5s" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4") : failed to sync secret cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.842293 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-daemon-config podName:638a78f4-bdb3-4d78-8faf-b4bc299717d2 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.342266277 +0000 UTC m=+24.979043667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-daemon-config") pod "multus-bjlq7" (UID: "638a78f4-bdb3-4d78-8faf-b4bc299717d2") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.843765 4697 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.843777 4697 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.843823 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-binary-copy podName:b6e81e9c-cc13-478a-91ce-6ad9d9c7d716 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.343812739 +0000 UTC m=+24.980590129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-binary-copy") pod "multus-additional-cni-plugins-sb8k8" (UID: "b6e81e9c-cc13-478a-91ce-6ad9d9c7d716") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.843849 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config podName:9b97fcec-14c2-49b1-bdc5-762e1b42d7a4 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.34383178 +0000 UTC m=+24.980609260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config") pod "ovnkube-node-h7x5s" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.856132 4697 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.856126 4697 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.856193 4697 projected.go:194] Error preparing data for projected volume kube-api-access-4g9ws for pod openshift-machine-config-operator/machine-config-daemon-mb5j7: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.856246 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2d3adb1-27d5-4fa0-a85e-35000080ac39-kube-api-access-4g9ws podName:e2d3adb1-27d5-4fa0-a85e-35000080ac39 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.356230776 +0000 UTC m=+24.993008166 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4g9ws" (UniqueName: "kubernetes.io/projected/e2d3adb1-27d5-4fa0-a85e-35000080ac39-kube-api-access-4g9ws") pod "machine-config-daemon-mb5j7" (UID: "e2d3adb1-27d5-4fa0-a85e-35000080ac39") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: E0126 00:08:02.858292 4697 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.882625 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.893832 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.905666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.905703 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.905712 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.905725 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:02 crc kubenswrapper[4697]: I0126 00:08:02.905736 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:02Z","lastTransitionTime":"2026-01-26T00:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.008145 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.008193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.008206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.008225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.008236 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.028063 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.077258 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.078174 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.081557 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.111424 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.111763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.111987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.112251 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.112442 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.145688 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.150421 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.160969 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtbg8\" (UniqueName: \"kubernetes.io/projected/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-kube-api-access-vtbg8\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.215247 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.215537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.215673 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.215745 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.215810 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.263210 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 26 00:08:03 crc kubenswrapper[4697]: E0126 00:08:03.266394 4697 projected.go:194] Error preparing data for projected volume kube-api-access-rbsfg for pod openshift-multus/multus-bjlq7: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:03 crc kubenswrapper[4697]: E0126 00:08:03.266558 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/638a78f4-bdb3-4d78-8faf-b4bc299717d2-kube-api-access-rbsfg podName:638a78f4-bdb3-4d78-8faf-b4bc299717d2 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.76653773 +0000 UTC m=+25.403315120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rbsfg" (UniqueName: "kubernetes.io/projected/638a78f4-bdb3-4d78-8faf-b4bc299717d2-kube-api-access-rbsfg") pod "multus-bjlq7" (UID: "638a78f4-bdb3-4d78-8faf-b4bc299717d2") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:03 crc kubenswrapper[4697]: E0126 00:08:03.268656 4697 projected.go:194] Error preparing data for projected volume kube-api-access-h725n for pod openshift-multus/multus-additional-cni-plugins-sb8k8: failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:03 crc kubenswrapper[4697]: E0126 00:08:03.268753 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-kube-api-access-h725n podName:b6e81e9c-cc13-478a-91ce-6ad9d9c7d716 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:03.76872878 +0000 UTC m=+25.405506280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h725n" (UniqueName: "kubernetes.io/projected/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-kube-api-access-h725n") pod "multus-additional-cni-plugins-sb8k8" (UID: "b6e81e9c-cc13-478a-91ce-6ad9d9c7d716") : failed to sync configmap cache: timed out waiting for the condition
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.318305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.318348 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.318359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.318374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.318383 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.359367 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.359469 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cni-binary-copy\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.359514 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g9ws\" (UniqueName: \"kubernetes.io/projected/e2d3adb1-27d5-4fa0-a85e-35000080ac39-kube-api-access-4g9ws\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.359577 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-daemon-config\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.359628 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.359683 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-binary-copy\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.360545 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.360630 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-cni-binary-copy\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.360591 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-cni-binary-copy\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.360564 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/638a78f4-bdb3-4d78-8faf-b4bc299717d2-multus-daemon-config\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.362965 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert\") pod \"ovnkube-node-h7x5s\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.362999 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g9ws\" (UniqueName: \"kubernetes.io/projected/e2d3adb1-27d5-4fa0-a85e-35000080ac39-kube-api-access-4g9ws\") pod \"machine-config-daemon-mb5j7\" (UID: \"e2d3adb1-27d5-4fa0-a85e-35000080ac39\") " pod="openshift-machine-config-operator/machine-config-daemon-mb5j7"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.420211 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.420255 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.420265 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.420280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.420291 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.522923 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.522983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.522999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.523023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.523038 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.528495 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:08:03 crc kubenswrapper[4697]: W0126 00:08:03.543309 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b97fcec_14c2_49b1_bdc5_762e1b42d7a4.slice/crio-054ec7f0069becfbbacc72cfc614b256b40baa10bbcb623248886d1073a64cc0 WatchSource:0}: Error finding container 054ec7f0069becfbbacc72cfc614b256b40baa10bbcb623248886d1073a64cc0: Status 404 returned error can't find the container with id 054ec7f0069becfbbacc72cfc614b256b40baa10bbcb623248886d1073a64cc0
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.553010 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.611597 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:06:11.301727196 +0000 UTC
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.626110 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.626159 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.626172 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.626190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.626205 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.729553 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.729593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.729605 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.729633 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.729646 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.800340 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2" exitCode=0
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.800417 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.800747 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"054ec7f0069becfbbacc72cfc614b256b40baa10bbcb623248886d1073a64cc0"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.808810 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.808963 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.809050 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"e88908fa05f6fccb1e658c4539b31f33f76953eabf3fb2430428971bd253cf03"}
Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.821813 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.832597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.832648 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.832658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.832674 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 
00:08:03.832686 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.838561 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.853058 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.863980 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h725n\" (UniqueName: \"kubernetes.io/projected/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-kube-api-access-h725n\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.864086 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rbsfg\" (UniqueName: \"kubernetes.io/projected/638a78f4-bdb3-4d78-8faf-b4bc299717d2-kube-api-access-rbsfg\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.866726 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.868470 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbsfg\" (UniqueName: \"kubernetes.io/projected/638a78f4-bdb3-4d78-8faf-b4bc299717d2-kube-api-access-rbsfg\") pod \"multus-bjlq7\" (UID: \"638a78f4-bdb3-4d78-8faf-b4bc299717d2\") " pod="openshift-multus/multus-bjlq7" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.870297 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h725n\" (UniqueName: \"kubernetes.io/projected/b6e81e9c-cc13-478a-91ce-6ad9d9c7d716-kube-api-access-h725n\") pod \"multus-additional-cni-plugins-sb8k8\" (UID: \"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\") " pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.879651 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.891885 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.904031 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.918911 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.932585 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.935164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 
00:08:03.935203 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.935216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.935234 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.935248 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:03Z","lastTransitionTime":"2026-01-26T00:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.944231 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.960871 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.972467 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2b
b72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:03 crc kubenswrapper[4697]: I0126 00:08:03.998583 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:03Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.009710 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.022400 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.035244 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.036914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.037038 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.037158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.037234 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.037296 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.058592 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.073164 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.088686 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.102847 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.117691 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.132728 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.138178 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-bjlq7" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.139643 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.139676 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.139688 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.139706 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.139719 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.144789 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.145888 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.155555 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.171849 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.188034 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.201825 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.215844 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.241687 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.241719 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.241729 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.241743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.241752 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.348304 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.348339 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.348350 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.348366 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.348378 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.369385 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.369603 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:08:12.36958426 +0000 UTC m=+34.006361650 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.452498 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.452556 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.452570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.452588 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: 
I0126 00:08:04.452607 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.470198 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.470244 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.470281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.470311 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470327 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470377 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470393 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:12.470372932 +0000 UTC m=+34.107150322 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470411 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470424 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-26 00:08:12.470411953 +0000 UTC m=+34.107189353 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470426 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470449 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470492 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:12.470481905 +0000 UTC m=+34.107259295 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470533 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470577 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470589 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.470658 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:12.47063939 +0000 UTC m=+34.107416780 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.554859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.554897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.554907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.554922 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.554933 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.612328 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:04:51.879370184 +0000 UTC Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.657796 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.657838 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.657848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.657874 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.657890 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.659598 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.659635 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.659695 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.659708 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.659759 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:04 crc kubenswrapper[4697]: E0126 00:08:04.659862 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.760745 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.760783 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.760792 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.760806 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.760816 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.813302 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerStarted","Data":"2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.813354 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerStarted","Data":"c7c6c69dae149a564a9e56b5a3bff169ab0cf77783b62bbb59d31919fcee16ff"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.816546 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.816586 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.816597 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.821879 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bjlq7" event={"ID":"638a78f4-bdb3-4d78-8faf-b4bc299717d2","Type":"ContainerStarted","Data":"1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69"} Jan 26 00:08:04 crc 
kubenswrapper[4697]: I0126 00:08:04.821924 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bjlq7" event={"ID":"638a78f4-bdb3-4d78-8faf-b4bc299717d2","Type":"ContainerStarted","Data":"6f0a4939403a57f8859a1d43f7b0a0d45b37f42257f4f18b72cbe0b96ae8948b"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.842125 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.862799 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.862837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.862850 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.862867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.862878 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.872534 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.884584 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.895269 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.915337 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.927684 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.937084 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.951727 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.962189 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.964456 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.964490 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.964501 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.964519 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.964531 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:04Z","lastTransitionTime":"2026-01-26T00:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.974738 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:04 crc kubenswrapper[4697]: I0126 00:08:04.987365 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:04.999997 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:04Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.012774 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.030542 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.044221 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.058014 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.066841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 
00:08:05.066871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.066881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.066896 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.066906 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.070433 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.086270 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.104705 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.116874 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.128901 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.138915 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.152433 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.169348 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.169544 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.169611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.169696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.169771 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.171351 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.188284 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.203419 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.214406 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.228646 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.273573 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.273907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.274034 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.274200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.274315 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.376909 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.377236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.377250 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.377267 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.377278 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.479031 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.479123 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.479140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.479165 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.479183 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.580820 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.580860 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.580868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.580881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.580892 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.613525 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:23:40.340917189 +0000 UTC Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.683591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.683648 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.683661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.683680 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.683692 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.786864 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.786943 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.786955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.786973 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.786986 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.831441 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.831506 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.831568 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.833915 4697 generic.go:334] "Generic (PLEG): container finished" podID="b6e81e9c-cc13-478a-91ce-6ad9d9c7d716" containerID="2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086" exitCode=0 Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.833965 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerDied","Data":"2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.856151 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.872936 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.889469 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.889514 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.889529 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.889549 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.889567 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.890739 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.904417 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.915017 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.927559 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.942834 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.954175 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.978913 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.990808 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:05Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.994846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.994896 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.994914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.994930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:05 crc kubenswrapper[4697]: I0126 00:08:05.994942 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:05Z","lastTransitionTime":"2026-01-26T00:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.005881 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.020022 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.032396 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.043835 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.098575 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.098605 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.098615 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.098629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.098638 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.202049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.202149 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.202166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.202192 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.202210 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.305435 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.305482 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.305493 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.305512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.305526 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.408666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.408701 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.408711 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.408725 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.408734 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.511613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.511692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.511709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.511736 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.511755 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.613709 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:43:37.096870907 +0000 UTC Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.614657 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.614733 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.614751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.614771 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.614785 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.659694 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:06 crc kubenswrapper[4697]: E0126 00:08:06.659812 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.659840 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.659855 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:06 crc kubenswrapper[4697]: E0126 00:08:06.659911 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:06 crc kubenswrapper[4697]: E0126 00:08:06.659972 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.717688 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.717717 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.717725 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.717737 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.717746 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.820675 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.820708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.820715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.820728 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.820738 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.838967 4697 generic.go:334] "Generic (PLEG): container finished" podID="b6e81e9c-cc13-478a-91ce-6ad9d9c7d716" containerID="70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9" exitCode=0 Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.839012 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerDied","Data":"70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.855174 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.870808 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.888258 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.899334 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.922522 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.922565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.922576 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.922593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.922605 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:06Z","lastTransitionTime":"2026-01-26T00:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.928413 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.942390 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.958880 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.968171 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.980052 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:06 crc kubenswrapper[4697]: I0126 00:08:06.992042 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:06Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.003119 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.003157 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.003165 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.003180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.003188 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.009412 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z 
is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: E0126 00:08:07.014647 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.019553 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.019607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.019627 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.019651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.019668 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.019710 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: E0126 00:08:07.032260 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.036039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.036177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.036196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.036255 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.036273 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.039307 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: E0126 00:08:07.049564 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.049774 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.052572 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.052604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.052614 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc 
kubenswrapper[4697]: I0126 00:08:07.052631 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.052642 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: E0126 00:08:07.062813 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.066873 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.066955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.066974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.066995 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.067012 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: E0126 00:08:07.089223 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: E0126 00:08:07.089519 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.091141 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.091170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.091180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.091224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.091237 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.194394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.194452 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.194470 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.194492 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.194510 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.297645 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.297693 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.297709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.297730 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.297748 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.400285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.400335 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.400351 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.400372 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.400388 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.503768 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.503813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.503830 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.503852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.503867 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.607301 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.607362 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.607379 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.607408 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.607425 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.614902 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:25:40.627907473 +0000 UTC Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.709862 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.709911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.709933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.709966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.709988 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.813417 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.813458 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.813471 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.813489 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.813502 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.844114 4697 generic.go:334] "Generic (PLEG): container finished" podID="b6e81e9c-cc13-478a-91ce-6ad9d9c7d716" containerID="192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe" exitCode=0 Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.844195 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerDied","Data":"192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.850419 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.860868 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.876959 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.885724 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.897979 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.912324 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a8
3ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.916260 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.916296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.916304 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.916317 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.916326 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:07Z","lastTransitionTime":"2026-01-26T00:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.925637 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.936022 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.944696 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.955934 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.966906 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.977162 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:07 crc kubenswrapper[4697]: I0126 00:08:07.997012 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:07Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.008688 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.019138 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.019178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.019190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc 
kubenswrapper[4697]: I0126 00:08:08.019205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.019216 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.019635 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.121630 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.121681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.121693 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.121710 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.121722 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.224718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.224775 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.224793 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.224816 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.224833 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.327873 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.327936 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.327961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.327990 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.328011 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.431519 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.431591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.431607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.431634 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.431651 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.516702 4697 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.536878 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.536932 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.536950 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.536973 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.536991 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.616585 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:50:44.427320492 +0000 UTC Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.640417 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.640504 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.640524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.640544 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.640556 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.659695 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.659788 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:08 crc kubenswrapper[4697]: E0126 00:08:08.659930 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.659961 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:08 crc kubenswrapper[4697]: E0126 00:08:08.660137 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:08 crc kubenswrapper[4697]: E0126 00:08:08.660293 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.673639 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.688729 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.704173 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.725246 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.744066 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.744146 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.744164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 
00:08:08.744188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.744206 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.746726 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a8
3ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.765167 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.782195 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-opera
tor@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.804767 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.820416 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.832717 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.844677 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.846083 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.846240 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.846316 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.846423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.846497 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.856935 4697 generic.go:334] "Generic (PLEG): container finished" podID="b6e81e9c-cc13-478a-91ce-6ad9d9c7d716" containerID="0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe" exitCode=0 Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.857025 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerDied","Data":"0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.857570 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.877240 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.892506 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.908721 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.925098 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.940266 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.949428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.949467 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.949479 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.949496 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.949507 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:08Z","lastTransitionTime":"2026-01-26T00:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.954407 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.965869 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.981130 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:08 crc kubenswrapper[4697]: I0126 00:08:08.993056 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\
\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:08Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.010866 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.021915 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.037568 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 
00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.049060 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.051558 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.051598 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.051612 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.051628 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.051638 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.061754 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.073323 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.090664 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.153643 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.153876 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.153935 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.153996 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.154063 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.256669 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.256718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.256731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.256747 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.256758 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.358829 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.358882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.358902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.358926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.358942 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.461478 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.461524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.461534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.461550 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.461560 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.564861 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.564929 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.564947 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.564973 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.564991 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.616999 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:05:35.827085133 +0000 UTC Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.667887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.667928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.667938 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.667956 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.667970 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.771605 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.771684 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.771707 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.771739 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.771760 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.865947 4697 generic.go:334] "Generic (PLEG): container finished" podID="b6e81e9c-cc13-478a-91ce-6ad9d9c7d716" containerID="7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956" exitCode=0 Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.866001 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerDied","Data":"7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.874328 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.874360 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.874370 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.874384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.874394 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.884746 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.909569 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.927600 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.945511 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.961480 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.974848 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:09Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.977989 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 
00:08:09.978047 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.978066 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.978134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:09 crc kubenswrapper[4697]: I0126 00:08:09.978155 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:09Z","lastTransitionTime":"2026-01-26T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.011423 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.046189 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.067825 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.080529 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.080561 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.080569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.080590 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.080599 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.080789 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.092315 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.107065 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.119383 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.134929 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.183374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.183416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.183428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.183445 4697 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.183458 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.286025 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.286065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.286087 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.286102 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.286111 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.389814 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.389890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.389912 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.389940 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.389962 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.496604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.496654 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.496684 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.496715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.496729 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.599048 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.599127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.599145 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.599166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.599182 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.617843 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:51:31.176456178 +0000 UTC Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.660431 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.660468 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:10 crc kubenswrapper[4697]: E0126 00:08:10.660974 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:10 crc kubenswrapper[4697]: E0126 00:08:10.660854 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.660532 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:10 crc kubenswrapper[4697]: E0126 00:08:10.661225 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.701964 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.702041 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.702059 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.702303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.702330 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.805716 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.805796 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.805822 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.805853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.805877 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.873706 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.874302 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.874324 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.879208 4697 generic.go:334] "Generic (PLEG): container finished" podID="b6e81e9c-cc13-478a-91ce-6ad9d9c7d716" containerID="fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3" exitCode=0 Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.879276 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerDied","Data":"fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.899030 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.903090 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.904710 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.908571 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.908601 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.908612 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.908629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.908640 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:10Z","lastTransitionTime":"2026-01-26T00:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.920922 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.938185 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.954292 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.969037 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.978972 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:10 crc kubenswrapper[4697]: I0126 00:08:10.993018 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:10Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.004225 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\
\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.010863 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.010892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.010903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.010920 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.010932 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.021237 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.031786 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.043296 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.055137 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.066384 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.076774 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.084690 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.096695 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 
00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.109886 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.113772 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.113819 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.113832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.113852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.113867 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.121274 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.135445 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a8
3ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.147929 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.161286 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.172933 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.185972 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.198543 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.209445 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.219592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.219636 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.219674 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.219691 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.219703 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.224420 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.250853 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.265191 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:11Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.322854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.323137 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.323381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.323511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.323611 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.427346 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.427758 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.428016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.428227 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.428604 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.531807 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.531858 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.531875 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.531899 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.531917 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.618559 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:43:15.555831107 +0000 UTC Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.634853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.635135 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.635272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.635384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.635470 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.738134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.738193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.738216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.738246 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.738268 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.841039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.841164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.841210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.841236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.841253 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.884573 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.945638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.945713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.945731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.945754 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:11 crc kubenswrapper[4697]: I0126 00:08:11.945771 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:11Z","lastTransitionTime":"2026-01-26T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.048902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.048963 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.048986 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.049015 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.049035 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.151932 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.151993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.152014 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.152039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.152057 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.255040 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.255143 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.255161 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.255188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.255205 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.357138 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.357188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.357200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.357218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.357231 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.448013 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.448269 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:08:28.448238931 +0000 UTC m=+50.085016341 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.459868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.459898 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.459907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.459921 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.459931 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.549382 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.549455 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.549528 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.549585 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549653 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549714 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549777 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:28.549748303 +0000 UTC m=+50.186525763 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549813 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:28.549789774 +0000 UTC m=+50.186567174 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549822 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549826 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549857 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549888 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549895 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.549915 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.550000 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:28.549971759 +0000 UTC m=+50.186749209 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.550040 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:08:28.55002301 +0000 UTC m=+50.186800510 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.562988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.563037 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.563051 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.563094 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.563110 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.619362 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:14:50.841470132 +0000 UTC Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.660248 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.660335 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.660253 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.660465 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.660677 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:12 crc kubenswrapper[4697]: E0126 00:08:12.660873 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.665164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.665200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.665211 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.665227 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.665239 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.767463 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.767506 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.767517 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.767533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.767545 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.869903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.869974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.869999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.870028 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.870047 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.893050 4697 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.893240 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" event={"ID":"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716","Type":"ContainerStarted","Data":"349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.913273 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:12Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.928613 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:12Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.946843 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:12Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.961684 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:12Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.972300 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 
00:08:12.972569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.972752 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.972889 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.973006 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:12Z","lastTransitionTime":"2026-01-26T00:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.975378 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:12Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:12 crc kubenswrapper[4697]: I0126 00:08:12.994354 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:12Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.007905 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\
\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.027955 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.043546 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.063180 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.076538 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.076580 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.076589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.076604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.076613 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.080827 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.094482 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.108599 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.129797 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0
990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761f
f92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:13Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.179196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.179261 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.179331 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.179358 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.179378 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.282593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.282649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.282668 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.282691 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.282711 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.386448 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.386499 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.386508 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.386526 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.386535 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.489761 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.489815 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.489827 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.489846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.489857 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.592530 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.592599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.592624 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.592657 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.592680 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.620497 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:42:54.996592224 +0000 UTC Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.671555 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.696158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.696220 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.696238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.696264 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.696282 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.799116 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.799365 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.799476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.799555 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.799626 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.901374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.901416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.901428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.901444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:13 crc kubenswrapper[4697]: I0126 00:08:13.901455 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:13Z","lastTransitionTime":"2026-01-26T00:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.004147 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.004174 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.004182 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.004202 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.004211 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.106158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.106189 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.106197 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.106210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.106218 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.208315 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.208359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.208371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.208388 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.208401 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.310822 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.310876 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.310894 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.310916 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.310932 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.413954 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.414028 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.414050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.414110 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.414137 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.517184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.517238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.517310 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.517328 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.517340 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.620426 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.620482 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.620499 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.620519 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.620534 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.621225 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:21:34.230543212 +0000 UTC Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.659938 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.660025 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.660089 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:14 crc kubenswrapper[4697]: E0126 00:08:14.660037 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:14 crc kubenswrapper[4697]: E0126 00:08:14.660183 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:14 crc kubenswrapper[4697]: E0126 00:08:14.660249 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.722112 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.722142 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.722150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.722161 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.722171 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.825775 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.825836 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.825854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.825881 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.825899 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.929544 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.929620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.929644 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.929696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.929725 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:14Z","lastTransitionTime":"2026-01-26T00:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.966470 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt"] Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.967362 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.970580 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.971145 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 00:08:14 crc kubenswrapper[4697]: I0126 00:08:14.993830 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819
eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650
e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f3
7cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c0
66e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:14Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.015171 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.032841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.032890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.032907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.032930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.032947 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.035322 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.050693 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.071695 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.076247 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ccf0abe4-7d69-43ad-aa11-747002f33846-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.076370 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvzs\" (UniqueName: \"kubernetes.io/projected/ccf0abe4-7d69-43ad-aa11-747002f33846-kube-api-access-bdvzs\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.076454 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ccf0abe4-7d69-43ad-aa11-747002f33846-env-overrides\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.076504 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/ccf0abe4-7d69-43ad-aa11-747002f33846-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.091233 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/c
rcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.113177 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.135608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.135647 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.135661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.135677 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.135689 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.145175 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.156976 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.169251 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.177394 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ccf0abe4-7d69-43ad-aa11-747002f33846-env-overrides\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.177465 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ccf0abe4-7d69-43ad-aa11-747002f33846-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.177530 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ccf0abe4-7d69-43ad-aa11-747002f33846-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.177588 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvzs\" (UniqueName: \"kubernetes.io/projected/ccf0abe4-7d69-43ad-aa11-747002f33846-kube-api-access-bdvzs\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.178148 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ccf0abe4-7d69-43ad-aa11-747002f33846-env-overrides\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.178798 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ccf0abe4-7d69-43ad-aa11-747002f33846-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.183293 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.185004 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ccf0abe4-7d69-43ad-aa11-747002f33846-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.198650 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvzs\" (UniqueName: \"kubernetes.io/projected/ccf0abe4-7d69-43ad-aa11-747002f33846-kube-api-access-bdvzs\") pod \"ovnkube-control-plane-749d76644c-skrqt\" (UID: \"ccf0abe4-7d69-43ad-aa11-747002f33846\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.201989 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.218780 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.229563 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.239307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.239379 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.239399 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.239424 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.239443 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.245029 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.288634 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" Jan 26 00:08:15 crc kubenswrapper[4697]: W0126 00:08:15.303493 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccf0abe4_7d69_43ad_aa11_747002f33846.slice/crio-3187b6fc3c7111f775b206347a2e1cfffce14d6c1d13d7112eb60d258c45b3ec WatchSource:0}: Error finding container 3187b6fc3c7111f775b206347a2e1cfffce14d6c1d13d7112eb60d258c45b3ec: Status 404 returned error can't find the container with id 3187b6fc3c7111f775b206347a2e1cfffce14d6c1d13d7112eb60d258c45b3ec Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.342678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.342710 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.342743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc 
kubenswrapper[4697]: I0126 00:08:15.342756 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.342764 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.445310 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.445346 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.445354 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.445368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.445376 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.548371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.548412 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.548425 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.548443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.548456 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.622185 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:11:12.20391914 +0000 UTC Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.651708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.651743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.651756 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.651772 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.651785 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.753856 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.753899 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.753909 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.753925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.753936 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.825604 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.840174 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2da
ed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"m
ountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.851623 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.855780 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.855817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.855828 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.855845 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.855858 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.860643 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b
6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.874019 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.889732 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.903941 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" event={"ID":"ccf0abe4-7d69-43ad-aa11-747002f33846","Type":"ContainerStarted","Data":"dd1eacca68e9c0586608851c250e8f55db39f0689470ac19a0451e1fcd8e28ab"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.903978 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" event={"ID":"ccf0abe4-7d69-43ad-aa11-747002f33846","Type":"ContainerStarted","Data":"4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.903988 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" event={"ID":"ccf0abe4-7d69-43ad-aa11-747002f33846","Type":"ContainerStarted","Data":"3187b6fc3c7111f775b206347a2e1cfffce14d6c1d13d7112eb60d258c45b3ec"} Jan 26 00:08:15 crc kubenswrapper[4697]: 
I0126 00:08:15.906400 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/0.log" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.909236 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423" exitCode=1 Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.909261 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.909773 4697 scope.go:117] "RemoveContainer" containerID="e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.909776 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.922876 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.940430 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.952458 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.957750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.957789 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.957802 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.957820 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.957831 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:15Z","lastTransitionTime":"2026-01-26T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.964165 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:15 crc kubenswrapper[4697]: I0126 00:08:15.991498 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:15Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.006620 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
6T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.019670 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.030168 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.043951 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T0
0:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.060211 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.060244 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.060253 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.060297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.060308 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.064447 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea9
36963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.082511 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.096656 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.105157 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xctft"] Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.105960 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 00:08:16.106060 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.115454 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.135039 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.146784 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.163534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.163657 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.163675 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.163692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.163704 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.163687 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z 
is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.175561 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.188014 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92hq4\" (UniqueName: \"kubernetes.io/projected/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-kube-api-access-92hq4\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.188119 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.200253 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"message\\\":\\\"lpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 00:08:14.409162 6007 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:14.409174 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 00:08:14.409205 6007 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409221 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 00:08:14.409232 6007 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:14.409244 6007 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:14.409298 6007 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:14.409322 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:14.409343 6007 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 00:08:14.409362 6007 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409722 6007 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.211538 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.224715 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.242243 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09
ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.260207 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.265787 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.265826 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.265837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.265853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.265863 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.284772 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.288892 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.288942 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92hq4\" (UniqueName: \"kubernetes.io/projected/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-kube-api-access-92hq4\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 00:08:16.289065 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 
00:08:16.289153 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs podName:dfdab702-3a9b-4646-ad6b-9bb9404e92ad nodeName:}" failed. No retries permitted until 2026-01-26 00:08:16.789131115 +0000 UTC m=+38.425908575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs") pod "network-metrics-daemon-xctft" (UID: "dfdab702-3a9b-4646-ad6b-9bb9404e92ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.306026 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9
bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.306439 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92hq4\" (UniqueName: \"kubernetes.io/projected/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-kube-api-access-92hq4\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.320001 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.335892 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"message\\\":\\\"lpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 00:08:14.409162 6007 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:14.409174 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 00:08:14.409205 6007 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409221 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 00:08:14.409232 6007 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:14.409244 6007 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:14.409298 6007 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:14.409322 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:14.409343 6007 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 00:08:14.409362 6007 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409722 6007 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.345524 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.354625 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.363169 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.367482 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.367652 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.367736 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.367801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.367872 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.374283 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.384825 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.395216 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.406804 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a
150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.417267 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.426429 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc 
kubenswrapper[4697]: I0126 00:08:16.440462 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.452376 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.464063 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.470492 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.470529 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.470537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.470551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.470559 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.474493 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.483841 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.572671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.572708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.572720 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.572735 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.572746 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.622467 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:45:04.351345815 +0000 UTC Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.660416 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.660485 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 00:08:16.660536 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.660561 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 00:08:16.660670 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 00:08:16.660832 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.674436 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.674493 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.674507 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.674522 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.674533 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.777504 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.777570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.777587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.777612 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.777629 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.794270 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 00:08:16.794445 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:16 crc kubenswrapper[4697]: E0126 00:08:16.794536 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs podName:dfdab702-3a9b-4646-ad6b-9bb9404e92ad nodeName:}" failed. No retries permitted until 2026-01-26 00:08:17.794513678 +0000 UTC m=+39.431291098 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs") pod "network-metrics-daemon-xctft" (UID: "dfdab702-3a9b-4646-ad6b-9bb9404e92ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.880578 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.880632 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.880649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.880672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.880690 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.919433 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/0.log" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.924446 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.925395 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.948967 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 
00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.972686 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.983133 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.983381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.983525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.983674 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.983821 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:16Z","lastTransitionTime":"2026-01-26T00:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:16 crc kubenswrapper[4697]: I0126 00:08:16.994195 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:16Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.011849 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.036435 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0
990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761f
f92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.057779 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.086744 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.086821 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.086842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc 
kubenswrapper[4697]: I0126 00:08:17.086871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.086892 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.100243 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 
00:08:17.119094 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.129674 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.146628 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.156900 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc 
kubenswrapper[4697]: I0126 00:08:17.167721 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.181643 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.189384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.189431 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.189449 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.189479 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.189495 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.207408 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"message\\\":\\\"lpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 00:08:14.409162 6007 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:14.409174 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:14.409205 6007 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409221 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 00:08:14.409232 6007 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:14.409244 6007 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:14.409298 6007 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:14.409322 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:14.409343 6007 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 00:08:14.409362 6007 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409722 6007 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.220433 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.231873 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.249788 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.250136 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.250361 4697 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.250569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.250761 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.269275 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.273763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.273900 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.274007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.274150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.274254 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.290412 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.294648 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.294680 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.294694 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.294757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.294773 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.312645 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.317424 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.317459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.317474 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.317495 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.317510 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.335183 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.338944 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.339146 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.339254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.339341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.339424 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.361263 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:17Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.361604 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.363903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.363950 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.363967 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.363988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.364004 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.466846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.466902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.466916 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.466935 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.466950 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.569885 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.569937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.569955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.569978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.569995 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.623854 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:33:18.624487169 +0000 UTC Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.660283 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.660535 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.672664 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.672709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.672727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.672751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.672768 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.775567 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.775685 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.775709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.775737 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.775757 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.804938 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.805190 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:17 crc kubenswrapper[4697]: E0126 00:08:17.805285 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs podName:dfdab702-3a9b-4646-ad6b-9bb9404e92ad nodeName:}" failed. No retries permitted until 2026-01-26 00:08:19.805262524 +0000 UTC m=+41.442039944 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs") pod "network-metrics-daemon-xctft" (UID: "dfdab702-3a9b-4646-ad6b-9bb9404e92ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.878511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.878564 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.878581 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.878606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.878631 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.981905 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.982248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.982297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.982324 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:17 crc kubenswrapper[4697]: I0126 00:08:17.982343 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:17Z","lastTransitionTime":"2026-01-26T00:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.085551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.085656 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.085678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.085704 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.085722 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.189217 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.189277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.189295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.189321 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.189340 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.292512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.292570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.292587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.292613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.292631 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.396134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.396193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.396211 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.396235 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.396253 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.499868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.499974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.500011 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.500057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.500124 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.603461 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.603533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.603555 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.603583 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.603604 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.624225 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:30:43.768779048 +0000 UTC Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.659704 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.659815 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:18 crc kubenswrapper[4697]: E0126 00:08:18.659868 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.659921 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:18 crc kubenswrapper[4697]: E0126 00:08:18.660166 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:18 crc kubenswrapper[4697]: E0126 00:08:18.660294 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.677572 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.
d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.699289 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09
ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.706251 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.706304 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.706320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.706342 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.706360 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.722586 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.751038 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.776688 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a
150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.796917 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.809518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.809774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.809919 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.810065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.810325 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.814546 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc 
kubenswrapper[4697]: I0126 00:08:18.839191 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.858130 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.881099 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.898585 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.910532 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.912687 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.912735 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.912750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.912772 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.912788 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:18Z","lastTransitionTime":"2026-01-26T00:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.927972 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.936259 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/1.log" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.937037 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/0.log" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.940978 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881"} Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.941052 4697 scope.go:117] "RemoveContainer" containerID="e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.941883 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881" exitCode=1 Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.942133 4697 scope.go:117] "RemoveContainer" containerID="f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881" Jan 26 00:08:18 crc kubenswrapper[4697]: E0126 00:08:18.942383 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.953933 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"message\\\":\\\"lpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 00:08:14.409162 6007 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:14.409174 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:14.409205 6007 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409221 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 00:08:14.409232 6007 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:14.409244 6007 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:14.409298 6007 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:14.409322 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:14.409343 6007 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 00:08:14.409362 6007 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409722 6007 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.970187 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.981300 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:18 crc kubenswrapper[4697]: I0126 00:08:18.995715 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09
ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:18Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.007673 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.015150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.015199 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.015218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.015238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.015253 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.019234 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.029903 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.047608 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0
990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761f
f92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.057555 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.068457 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.078310 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.090514 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.107229 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.118767 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.118832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.118856 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.118887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.118909 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.122033 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc 
kubenswrapper[4697]: I0126 00:08:19.140607 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.156830 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.181128 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"message\\\":\\\"lpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 00:08:14.409162 6007 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:14.409174 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:14.409205 6007 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409221 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 00:08:14.409232 6007 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:14.409244 6007 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:14.409298 6007 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:14.409322 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:14.409343 6007 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 00:08:14.409362 6007 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409722 6007 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:18Z\\\",\\\"message\\\":\\\"m/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:16.695114 6221 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:16.695997 6221 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 00:08:16.696044 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 00:08:16.696055 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 
00:08:16.696118 6221 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 00:08:16.696128 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 00:08:16.696152 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 00:08:16.696118 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:16.696204 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:16.696222 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:16.696223 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 00:08:16.696245 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:16.696272 6221 factory.go:656] Stopping watch factory\\\\nI0126 00:08:16.696287 6221 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtb
g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.201477 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.219459 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:19Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.221052 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.221128 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.221144 4697 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.221160 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.221172 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.323725 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.323824 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.323842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.323869 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.323887 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.426394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.426441 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.426460 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.426483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.426500 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.528783 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.528855 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.528878 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.528910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.528951 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.625177 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:18:20.045022744 +0000 UTC Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.631953 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.632015 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.632037 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.632064 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.632133 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.660292 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:19 crc kubenswrapper[4697]: E0126 00:08:19.660487 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.735367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.735426 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.735443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.735467 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.735487 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.827603 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:19 crc kubenswrapper[4697]: E0126 00:08:19.827792 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:19 crc kubenswrapper[4697]: E0126 00:08:19.827868 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs podName:dfdab702-3a9b-4646-ad6b-9bb9404e92ad nodeName:}" failed. No retries permitted until 2026-01-26 00:08:23.827844287 +0000 UTC m=+45.464621717 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs") pod "network-metrics-daemon-xctft" (UID: "dfdab702-3a9b-4646-ad6b-9bb9404e92ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.837819 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.837903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.837921 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.837945 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.837962 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.940768 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.940814 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.940831 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.940852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.940868 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:19Z","lastTransitionTime":"2026-01-26T00:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:19 crc kubenswrapper[4697]: I0126 00:08:19.949903 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/1.log" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.043541 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.043624 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.043649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.043672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.043688 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.146993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.147388 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.147534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.147681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.147811 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.250732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.251121 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.251309 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.251528 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.251708 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.355120 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.355162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.355176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.355195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.355210 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.457458 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.457527 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.457538 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.457554 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.457565 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.560249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.560288 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.560296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.560311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.560320 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.625590 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:21:33.966567476 +0000 UTC Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.659676 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.659738 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.659675 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:20 crc kubenswrapper[4697]: E0126 00:08:20.659885 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:20 crc kubenswrapper[4697]: E0126 00:08:20.660024 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:20 crc kubenswrapper[4697]: E0126 00:08:20.660153 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.662769 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.662843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.662867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.662937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.662964 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.766421 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.766806 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.766974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.767235 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.767462 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.870425 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.870852 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.871014 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.871227 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.871368 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.973988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.974385 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.974525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.974658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:20 crc kubenswrapper[4697]: I0126 00:08:20.974797 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:20Z","lastTransitionTime":"2026-01-26T00:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.077607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.077656 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.077672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.077696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.077713 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.180616 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.180713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.180731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.180753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.180774 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.284172 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.284230 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.284251 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.284275 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.284293 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.387479 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.387564 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.387584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.387610 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.387633 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.490850 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.490911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.490930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.490955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.490977 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.593939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.594008 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.594026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.594051 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.594112 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.626658 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:02:09.464267055 +0000 UTC Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.660469 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:21 crc kubenswrapper[4697]: E0126 00:08:21.660661 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.697330 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.697405 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.697432 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.697466 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.697487 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.800838 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.800916 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.800932 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.800958 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.800979 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.903718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.903801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.903825 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.903853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:21 crc kubenswrapper[4697]: I0126 00:08:21.903871 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:21Z","lastTransitionTime":"2026-01-26T00:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.007306 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.007377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.007402 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.007434 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.007457 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.110058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.110141 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.110158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.110181 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.110199 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.214495 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.214565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.214588 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.214613 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.214632 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.318001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.318063 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.318117 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.318143 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.318161 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.422016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.422125 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.422144 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.422169 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.422186 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.525407 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.525465 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.525488 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.525516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.525540 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.626859 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:30:40.008157625 +0000 UTC Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.629121 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.629185 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.629202 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.629225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.629245 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.659784 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.659911 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:22 crc kubenswrapper[4697]: E0126 00:08:22.659948 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.659982 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:22 crc kubenswrapper[4697]: E0126 00:08:22.660207 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:22 crc kubenswrapper[4697]: E0126 00:08:22.660336 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.736515 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.736575 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.736593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.736617 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.736634 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.840140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.840227 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.840245 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.840270 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.840287 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.943315 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.943396 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.943419 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.943448 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:22 crc kubenswrapper[4697]: I0126 00:08:22.943470 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:22Z","lastTransitionTime":"2026-01-26T00:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.045850 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.045914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.045926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.045942 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.045954 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.148743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.148826 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.148850 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.148880 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.148899 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.252257 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.252334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.252351 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.252377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.252398 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.355512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.355580 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.355598 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.355622 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.355642 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.458911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.458981 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.458999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.459026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.459044 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.562618 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.562681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.562701 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.562730 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.562749 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.627618 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:08:05.012039908 +0000 UTC Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.660445 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:23 crc kubenswrapper[4697]: E0126 00:08:23.660673 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.665951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.666021 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.666039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.666062 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.666119 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.770327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.771255 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.771294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.771320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.771340 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.875194 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.875238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.875257 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.875280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.875296 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.878512 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:23 crc kubenswrapper[4697]: E0126 00:08:23.878736 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:23 crc kubenswrapper[4697]: E0126 00:08:23.878823 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs podName:dfdab702-3a9b-4646-ad6b-9bb9404e92ad nodeName:}" failed. No retries permitted until 2026-01-26 00:08:31.878799408 +0000 UTC m=+53.515576838 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs") pod "network-metrics-daemon-xctft" (UID: "dfdab702-3a9b-4646-ad6b-9bb9404e92ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.978181 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.978237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.978256 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.978278 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:23 crc kubenswrapper[4697]: I0126 00:08:23.978297 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:23Z","lastTransitionTime":"2026-01-26T00:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.081541 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.081597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.081614 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.081639 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.081656 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.185170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.185221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.185237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.185259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.185276 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.288108 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.288169 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.288191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.288219 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.288239 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.391617 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.391698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.391715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.391741 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.391761 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.494910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.494969 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.494986 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.495009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.495027 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.598502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.598573 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.598591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.598618 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.598637 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.628153 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:24:51.900888217 +0000 UTC Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.659895 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.659936 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:24 crc kubenswrapper[4697]: E0126 00:08:24.660041 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.660108 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:24 crc kubenswrapper[4697]: E0126 00:08:24.660647 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:24 crc kubenswrapper[4697]: E0126 00:08:24.660542 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.702608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.702663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.702681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.702708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.702730 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.805933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.805998 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.806023 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.806054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.806114 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.908927 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.909006 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.909030 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.909057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:24 crc kubenswrapper[4697]: I0126 00:08:24.909127 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:24Z","lastTransitionTime":"2026-01-26T00:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.011882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.011920 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.011930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.011946 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.011957 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.114705 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.114760 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.114776 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.114798 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.114815 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.219732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.219812 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.219838 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.219888 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.219913 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.322877 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.322923 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.322934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.322950 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.322962 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.425801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.425863 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.425885 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.425915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.425937 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.529648 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.529719 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.529757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.529787 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.529808 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.628689 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:24:43.397700614 +0000 UTC Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.633276 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.633376 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.633396 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.633420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.633435 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.659943 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:25 crc kubenswrapper[4697]: E0126 00:08:25.660205 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.735753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.735815 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.735833 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.735859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.735875 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.838619 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.838684 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.838706 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.838735 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.838759 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.941414 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.941479 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.941499 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.941524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:25 crc kubenswrapper[4697]: I0126 00:08:25.941541 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:25Z","lastTransitionTime":"2026-01-26T00:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.045377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.045456 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.045480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.045511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.045535 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.148701 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.148770 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.148794 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.148825 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.148846 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.252172 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.252244 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.252266 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.252293 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.252314 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.355124 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.355190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.355210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.355236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.355253 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.459262 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.459370 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.459397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.459439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.459468 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.563705 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.563777 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.563793 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.563818 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.563835 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.629687 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:39:14.032015741 +0000 UTC Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.660229 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.660313 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:26 crc kubenswrapper[4697]: E0126 00:08:26.660485 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.660545 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:26 crc kubenswrapper[4697]: E0126 00:08:26.660972 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:26 crc kubenswrapper[4697]: E0126 00:08:26.661178 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.666534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.666596 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.666621 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.666649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.666673 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.770276 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.770359 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.770383 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.770418 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.770441 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.873849 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.873935 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.873960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.873992 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.874014 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.978001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.978111 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.978140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.978168 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:26 crc kubenswrapper[4697]: I0126 00:08:26.978186 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:26Z","lastTransitionTime":"2026-01-26T00:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.081600 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.081661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.081686 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.081720 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.081742 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.185410 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.185727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.185865 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.185988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.186153 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.289614 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.290027 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.290231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.290415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.290580 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.394238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.394592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.394752 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.394987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.395186 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.498910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.499125 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.499157 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.499188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.499210 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.572701 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.572777 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.572807 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.572838 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.572858 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: E0126 00:08:27.596862 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:27Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.602841 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.603133 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.603155 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.603184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.603204 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: E0126 00:08:27.628496 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:27Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.630158 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:06:23.901103207 +0000 UTC Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.634815 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.634924 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.634952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.634993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.635020 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: E0126 00:08:27.656876 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:27Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.660187 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:27 crc kubenswrapper[4697]: E0126 00:08:27.660437 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.662882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.662933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.662952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.662975 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.662991 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: E0126 00:08:27.683803 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:27Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.690303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.690378 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.690394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.690428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.690459 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: E0126 00:08:27.712690 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:27Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:27 crc kubenswrapper[4697]: E0126 00:08:27.712930 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.715493 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.715565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.715585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.715615 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.715635 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.819272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.819334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.819347 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.819372 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.819394 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.923544 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.923635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.923659 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.923692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:27 crc kubenswrapper[4697]: I0126 00:08:27.923719 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:27Z","lastTransitionTime":"2026-01-26T00:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.025706 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.025740 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.025748 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.025761 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.025769 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.129272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.129337 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.129355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.129378 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.129397 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.232671 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.232736 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.232753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.232780 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.232799 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.337334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.337396 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.337414 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.337439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.337454 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.440109 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.440180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.440200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.440225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.440240 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.538799 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.539208 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:09:00.539154691 +0000 UTC m=+82.175932121 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.541869 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.543866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.543930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.543944 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.543972 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.543989 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.560424 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.571856 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"message\\\":\\\"lpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 00:08:14.409162 6007 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:14.409174 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:14.409205 6007 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409221 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 00:08:14.409232 6007 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:14.409244 6007 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:14.409298 6007 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:14.409322 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:14.409343 6007 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 00:08:14.409362 6007 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409722 6007 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:18Z\\\",\\\"message\\\":\\\"m/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:16.695114 6221 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:16.695997 6221 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 00:08:16.696044 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 00:08:16.696055 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 
00:08:16.696118 6221 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 00:08:16.696128 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 00:08:16.696152 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 00:08:16.696118 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:16.696204 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:16.696222 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:16.696223 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 00:08:16.696245 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:16.696272 6221 factory.go:656] Stopping watch factory\\\\nI0126 00:08:16.696287 6221 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtb
g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.590380 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.606462 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.623478 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.631160 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:46:42.224712423 +0000 UTC Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.640236 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.640565 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.640648 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.640722 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.640752 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.640779 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.640834 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:09:00.640807548 +0000 UTC m=+82.277584968 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.640833 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.640927 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:09:00.64090511 +0000 UTC m=+82.277682560 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.640951 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.640965 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.641037 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.641060 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.640978 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.641159 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.641175 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:09:00.641149187 +0000 UTC m=+82.277926617 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.641235 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:09:00.641211619 +0000 UTC m=+82.277989049 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.651303 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.651357 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.651374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.651429 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.651449 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.659752 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.659809 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.659861 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.659941 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.660052 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:28 crc kubenswrapper[4697]: E0126 00:08:28.660318 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.666489 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.682494 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.698303 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T0
0:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.720977 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a
150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.742469 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.753551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.753597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.753611 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.753635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.753652 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.764328 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.778023 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.796683 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.809789 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc 
kubenswrapper[4697]: I0126 00:08:28.825429 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.847419 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.856812 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.856871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.856913 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.856954 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.856979 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.868361 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.887820 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.908709 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.931538 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T0
0:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.948826 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.960430 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.960480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.960494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.960512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.960524 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:28Z","lastTransitionTime":"2026-01-26T00:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.973238 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:28 crc kubenswrapper[4697]: I0126 00:08:28.993848 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:28Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.010546 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.027902 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.043441 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc 
kubenswrapper[4697]: I0126 00:08:29.062785 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.064332 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.064384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.064401 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.064429 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.064446 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.079262 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.101740 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.116767 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.131891 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.145489 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.167891 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.167948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.167968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.167996 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.168015 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.180200 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e52b8559fbce35200501482adfe6f4c9c1b6a77bda9aae79070381a04d36a423\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"message\\\":\\\"lpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 00:08:14.409162 6007 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:14.409174 6007 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:14.409205 6007 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409221 6007 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 00:08:14.409232 6007 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:14.409244 6007 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:14.409298 6007 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:14.409322 6007 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:14.409343 6007 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 00:08:14.409362 6007 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:14.409722 6007 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:18Z\\\",\\\"message\\\":\\\"m/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:16.695114 6221 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:16.695997 6221 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 00:08:16.696044 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 00:08:16.696055 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 
00:08:16.696118 6221 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 00:08:16.696128 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 00:08:16.696152 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 00:08:16.696118 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:16.696204 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:16.696222 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:16.696223 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 00:08:16.696245 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:16.696272 6221 factory.go:656] Stopping watch factory\\\\nI0126 00:08:16.696287 6221 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtb
g8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:29Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.270590 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.270632 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.270644 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.270662 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.270674 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.374055 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.374156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.374182 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.374212 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.374358 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.477496 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.477556 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.477589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.477615 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.477639 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.581291 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.581388 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.581415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.581444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.581465 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.631331 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:44:50.787967913 +0000 UTC Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.660011 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:29 crc kubenswrapper[4697]: E0126 00:08:29.660295 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.684696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.684971 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.685153 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.685311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.685438 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.788367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.788454 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.788478 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.788512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.788537 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.892242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.892330 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.892353 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.892382 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.892404 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.996401 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.996840 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.997166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.997419 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:29 crc kubenswrapper[4697]: I0126 00:08:29.997594 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:29Z","lastTransitionTime":"2026-01-26T00:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.101131 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.101205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.101228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.101266 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.101302 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.204813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.204885 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.204904 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.204932 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.204953 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.308577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.308649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.308668 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.308696 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.308717 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.411831 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.411895 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.411914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.411939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.411958 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.514915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.515006 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.515025 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.515049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.515065 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.617844 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.617892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.617909 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.617932 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.617949 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.631876 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 02:30:13.150484742 +0000 UTC Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.659769 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.659823 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.659768 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:30 crc kubenswrapper[4697]: E0126 00:08:30.660009 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:30 crc kubenswrapper[4697]: E0126 00:08:30.660097 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:30 crc kubenswrapper[4697]: E0126 00:08:30.660164 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.720305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.720371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.720394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.720420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.720438 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.823631 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.823701 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.823724 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.823751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.823768 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.927215 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.927276 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.927294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.927320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:30 crc kubenswrapper[4697]: I0126 00:08:30.927339 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:30Z","lastTransitionTime":"2026-01-26T00:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.030560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.030626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.030642 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.030668 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.030685 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.133876 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.133955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.133979 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.134013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.134037 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.240744 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.240840 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.240933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.240966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.240990 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.345196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.345330 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.345351 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.345381 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.345402 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.447699 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.447748 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.447760 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.447780 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.447795 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.551191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.551268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.551287 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.551316 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.551339 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.632711 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:30:53.02523126 +0000 UTC Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.654288 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.654367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.654392 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.654430 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.654459 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.659419 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:31 crc kubenswrapper[4697]: E0126 00:08:31.659569 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.757176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.757231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.757242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.757264 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.757278 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.860274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.860340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.860358 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.860397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.860435 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.879042 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:31 crc kubenswrapper[4697]: E0126 00:08:31.879293 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:31 crc kubenswrapper[4697]: E0126 00:08:31.879387 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs podName:dfdab702-3a9b-4646-ad6b-9bb9404e92ad nodeName:}" failed. No retries permitted until 2026-01-26 00:08:47.879361179 +0000 UTC m=+69.516138609 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs") pod "network-metrics-daemon-xctft" (UID: "dfdab702-3a9b-4646-ad6b-9bb9404e92ad") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.963348 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.963391 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.963405 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.963422 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:31 crc kubenswrapper[4697]: I0126 00:08:31.963433 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:31Z","lastTransitionTime":"2026-01-26T00:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.067579 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.067644 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.067658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.067687 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.067700 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.170966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.171043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.171064 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.171126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.171148 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.273890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.273976 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.273994 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.274026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.274047 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.377335 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.377418 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.377442 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.377473 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.377499 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.480100 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.480157 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.480172 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.480221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.480238 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.583090 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.583147 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.583162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.583184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.583200 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.633371 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 20:43:17.17515435 +0000 UTC Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.659738 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.659779 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.659884 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:32 crc kubenswrapper[4697]: E0126 00:08:32.660115 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:32 crc kubenswrapper[4697]: E0126 00:08:32.660544 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:32 crc kubenswrapper[4697]: E0126 00:08:32.660317 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.685426 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.685468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.685479 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.685495 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.685506 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.788482 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.788534 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.788554 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.788575 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.788594 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.891551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.891586 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.891595 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.891629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.891637 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.995095 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.995140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.995153 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.995172 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:32 crc kubenswrapper[4697]: I0126 00:08:32.995184 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:32Z","lastTransitionTime":"2026-01-26T00:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.098277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.098327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.098341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.098364 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.098380 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.201184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.201248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.201268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.201289 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.201306 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.304532 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.304585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.304604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.304627 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.304644 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.407189 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.407235 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.407249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.407268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.407281 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.510731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.510810 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.510835 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.510866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.510889 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.613580 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.613681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.613701 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.613757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.613775 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.634059 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:00:22.274766894 +0000 UTC Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.659600 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:33 crc kubenswrapper[4697]: E0126 00:08:33.659760 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.660360 4697 scope.go:117] "RemoveContainer" containerID="f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.682030 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.711833 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:18Z\\\",\\\"message\\\":\\\"m/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:16.695114 6221 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:16.695997 6221 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:16.696044 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 00:08:16.696055 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 00:08:16.696118 6221 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 00:08:16.696128 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 00:08:16.696152 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 00:08:16.696118 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:16.696204 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:16.696222 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:16.696223 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 00:08:16.696245 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:16.696272 6221 factory.go:656] Stopping watch factory\\\\nI0126 00:08:16.696287 6221 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab05
5da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.716390 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.716485 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.716500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.716517 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.716528 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.724476 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.741342 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.755493 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.768539 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89
c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.781746 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.795250 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.809471 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.818599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.818634 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.818644 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.818663 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.818674 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.825772 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.838691 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1
afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP
\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.854390 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc 
kubenswrapper[4697]: I0126 00:08:33.871548 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.889116 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.901577 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.912558 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.921599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 
00:08:33.921629 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.921637 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.921650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.921659 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:33Z","lastTransitionTime":"2026-01-26T00:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:33 crc kubenswrapper[4697]: I0126 00:08:33.927423 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:33Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.011555 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/1.log" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.014255 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.014730 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.023866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.023904 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.023916 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.023936 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.023959 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.038367 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.059287 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.076474 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.091807 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.107554 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.121097 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.126002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.126037 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.126049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.126086 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.126100 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.134291 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z 
is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.143108 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc 
kubenswrapper[4697]: I0126 00:08:34.153709 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.173116 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:18Z\\\",\\\"message\\\":\\\"m/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:16.695114 6221 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:16.695997 6221 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:16.696044 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 00:08:16.696055 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 00:08:16.696118 6221 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 00:08:16.696128 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 00:08:16.696152 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 00:08:16.696118 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:16.696204 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:16.696222 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:16.696223 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 00:08:16.696245 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:16.696272 6221 factory.go:656] Stopping watch factory\\\\nI0126 00:08:16.696287 6221 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.184785 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.202918 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.226713 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09
ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.228749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.228777 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.228785 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.228798 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.228807 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.243024 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22
469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.259671 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.271884 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.281103 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.332468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.332512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.332524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.332549 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.332558 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.434868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.434928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.434945 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.434968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.434984 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.537642 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.537686 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.537698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.537716 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.537728 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.634378 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 16:29:25.09666663 +0000 UTC Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.640418 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.640463 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.640477 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.640498 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.640512 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.659923 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.660052 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:34 crc kubenswrapper[4697]: E0126 00:08:34.660174 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.660210 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:34 crc kubenswrapper[4697]: E0126 00:08:34.660337 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:34 crc kubenswrapper[4697]: E0126 00:08:34.660395 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.742991 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.743024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.743033 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.743045 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.743053 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.845980 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.846015 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.846028 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.846043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.846053 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.949900 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.949937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.949947 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.949961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:34 crc kubenswrapper[4697]: I0126 00:08:34.949971 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:34Z","lastTransitionTime":"2026-01-26T00:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.020329 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/2.log" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.021135 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/1.log" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.024718 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624" exitCode=1 Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.024834 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.024927 4697 scope.go:117] "RemoveContainer" containerID="f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.026048 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624" Jan 26 00:08:35 crc kubenswrapper[4697]: E0126 00:08:35.026332 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.047243 4697 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.052959 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.053006 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.053022 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.053047 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.053067 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.066946 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.084105 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.097304 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.113007 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.128222 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc 
kubenswrapper[4697]: I0126 00:08:35.143021 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.155528 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.155800 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.155931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.155951 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.156206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.156337 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.179535 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f045385a08f6f0b2af54643941f104d229ec058f95af04dd007dba5d54360881\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:18Z\\\",\\\"message\\\":\\\"m/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 00:08:16.695114 6221 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 00:08:16.695997 6221 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0126 00:08:16.696044 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 00:08:16.696055 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 00:08:16.696118 6221 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 00:08:16.696128 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 00:08:16.696152 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 00:08:16.696118 6221 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 00:08:16.696204 6221 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 00:08:16.696222 6221 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 00:08:16.696223 6221 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 00:08:16.696245 6221 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 00:08:16.696272 6221 factory.go:656] Stopping watch factory\\\\nI0126 00:08:16.696287 6221 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:34Z\\\",\\\"message\\\":\\\"on-xctft\\\\nF0126 00:08:34.500245 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z]\\\\nI0126 00:08:34.500295 6367 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xctft in node crc\\\\nI0126 00:08:34.500047 6367 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0126 00:08:34.500178 6367 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.194871 4697 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.211163 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.227232 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09
ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.242494 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.254659 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.258483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.258504 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.258512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.258525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.258533 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.273842 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.288884 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.304956 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0
990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761f
f92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:35Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.361184 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.361237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.361259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.361286 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.361307 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.464216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.464271 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.464288 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.464311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.464327 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.567670 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.567946 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.568155 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.568297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.568445 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.634774 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 00:38:07.782337244 +0000 UTC Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.659805 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:35 crc kubenswrapper[4697]: E0126 00:08:35.659986 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.671000 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.671105 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.671133 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.671166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.671189 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.774511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.774568 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.774584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.774607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.774624 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.877371 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.877439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.877456 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.877480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.877497 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.980021 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.980107 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.980125 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.980149 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:35 crc kubenswrapper[4697]: I0126 00:08:35.980167 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:35Z","lastTransitionTime":"2026-01-26T00:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.032613 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/2.log" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.038154 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624" Jan 26 00:08:36 crc kubenswrapper[4697]: E0126 00:08:36.038688 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.064721 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a
150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.083523 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.083620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.083640 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.083681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.083700 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.086554 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.105480 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.122744 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.140158 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.152035 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc 
kubenswrapper[4697]: I0126 00:08:36.167995 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.184306 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.185957 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.186034 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.186104 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.186142 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.186169 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.214433 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:34Z\\\",\\\"message\\\":\\\"on-xctft\\\\nF0126 00:08:34.500245 6367 ovnkube.go:137] failed to run ovnkube: [failed to 
start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z]\\\\nI0126 00:08:34.500295 6367 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xctft in node crc\\\\nI0126 00:08:34.500047 6367 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0126 00:08:34.500178 6367 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab05
5da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.230845 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.249139 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.267329 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.287886 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.289395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.289473 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.289500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.289537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.289561 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.306485 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.322221 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.335580 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.356516 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T0
0:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:36Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.393056 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.393156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.393174 4697 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.393198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.393215 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.495951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.496029 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.496057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.496119 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.496144 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.599626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.599681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.599697 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.599721 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.599739 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.636529 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:02:54.921549598 +0000 UTC Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.660432 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.660445 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.660725 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:36 crc kubenswrapper[4697]: E0126 00:08:36.660619 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:36 crc kubenswrapper[4697]: E0126 00:08:36.660841 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:36 crc kubenswrapper[4697]: E0126 00:08:36.660957 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.703307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.703386 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.703408 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.703436 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.703453 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.806194 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.806250 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.806267 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.806292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.806306 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.910759 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.910855 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.910882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.910917 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:36 crc kubenswrapper[4697]: I0126 00:08:36.910951 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:36Z","lastTransitionTime":"2026-01-26T00:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.013784 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.013853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.013876 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.013905 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.013928 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.117428 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.117475 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.117490 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.117511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.117526 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.220872 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.220948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.220975 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.221008 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.221034 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.323715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.323774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.323792 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.323817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.323835 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.426448 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.426594 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.426623 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.426645 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.426662 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.529347 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.529411 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.529444 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.529483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.529508 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.632800 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.632869 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.632890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.632919 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.632940 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.637308 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 08:26:47.659109328 +0000 UTC Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.659969 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:37 crc kubenswrapper[4697]: E0126 00:08:37.660219 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.736335 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.736397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.736416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.736439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.736456 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.839651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.839790 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.839810 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.839834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.839852 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.941597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.941666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.941684 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.941712 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.941735 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: E0126 00:08:37.963935 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.969627 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.969675 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.969690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.969716 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.969734 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:37 crc kubenswrapper[4697]: E0126 00:08:37.990397 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.995531 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.995577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.995594 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.995618 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:37 crc kubenswrapper[4697]: I0126 00:08:37.995635 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:37Z","lastTransitionTime":"2026-01-26T00:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: E0126 00:08:38.015529 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.020335 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.020384 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.020405 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.020427 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.020445 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: E0126 00:08:38.040651 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.046685 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.046736 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.046753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.046773 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.046789 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: E0126 00:08:38.067950 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: E0126 00:08:38.068215 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.069999 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.070050 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.070067 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.070111 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.070127 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.173043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.173162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.173182 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.173206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.173224 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.276295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.276367 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.276401 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.276432 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.276455 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.379939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.380013 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.380030 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.380053 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.380108 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.483198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.483254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.483270 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.483292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.483309 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.587044 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.587134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.587152 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.587177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.587195 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.637875 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:54:55.778740797 +0000 UTC Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.659563 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.659629 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:38 crc kubenswrapper[4697]: E0126 00:08:38.659767 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:38 crc kubenswrapper[4697]: E0126 00:08:38.659895 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.660343 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:38 crc kubenswrapper[4697]: E0126 00:08:38.660537 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.682347 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.690714 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.690770 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.690789 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.690813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.690831 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.715276 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:34Z\\\",\\\"message\\\":\\\"on-xctft\\\\nF0126 00:08:34.500245 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z]\\\\nI0126 00:08:34.500295 6367 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xctft in node crc\\\\nI0126 00:08:34.500047 6367 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0126 00:08:34.500178 6367 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab05
5da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.734613 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.755022 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.778998 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09
ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.794198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.794267 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.794286 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.794313 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.794331 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.798759 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22
469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.820957 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.842287 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.858843 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.883811 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0
990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761f
f92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.896728 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.896798 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.896826 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.896855 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.896879 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:38Z","lastTransitionTime":"2026-01-26T00:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.903585 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.925060 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.943869 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.960539 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:38 crc kubenswrapper[4697]: I0126 00:08:38.982780 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.000003 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.000057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:38.999953 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.000152 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.000188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.000210 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.022344 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb56
1922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.103241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.103318 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.103338 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.103363 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.103380 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.206035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.206139 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.206163 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.206190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.206207 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.308911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.308961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.308978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.309002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.309021 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.411834 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.411892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.411910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.411935 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.411953 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.516126 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.516251 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.516279 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.516368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.516443 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.620205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.620255 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.620271 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.620294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.620310 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.638938 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:51:48.907027213 +0000 UTC Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.660434 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:39 crc kubenswrapper[4697]: E0126 00:08:39.660633 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.723166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.723213 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.723225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.723241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.723252 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.826006 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.826061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.826111 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.826137 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.826154 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.928400 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.928459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.928475 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.928497 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:39 crc kubenswrapper[4697]: I0126 00:08:39.928515 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:39Z","lastTransitionTime":"2026-01-26T00:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.031661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.031722 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.031740 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.031764 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.031781 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.135137 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.135200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.135220 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.135245 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.135262 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.238987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.239058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.239127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.239162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.239187 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.342415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.342464 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.342476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.342494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.342506 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.445637 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.445677 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.445689 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.445705 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.445717 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.548018 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.548139 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.548160 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.548186 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.548205 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.639139 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:11:20.586107193 +0000 UTC Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.651957 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.651983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.651992 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.652026 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.652037 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.662244 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.662304 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.662361 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:40 crc kubenswrapper[4697]: E0126 00:08:40.662409 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:40 crc kubenswrapper[4697]: E0126 00:08:40.662563 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:40 crc kubenswrapper[4697]: E0126 00:08:40.662730 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.755407 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.755475 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.755499 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.755529 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.755551 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.857983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.858017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.858028 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.858043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.858055 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.961896 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.961949 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.961965 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.961989 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:40 crc kubenswrapper[4697]: I0126 00:08:40.962006 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:40Z","lastTransitionTime":"2026-01-26T00:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.064512 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.064565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.064582 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.064604 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.064621 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.167866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.167933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.167950 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.167974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.167992 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.271650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.271735 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.271751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.271772 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.271788 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.374415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.374471 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.374491 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.374516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.374534 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.477678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.477720 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.477731 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.477747 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.477760 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.580224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.580305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.580327 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.580353 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.580370 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.639308 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:02:57.374498689 +0000 UTC Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.659692 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:41 crc kubenswrapper[4697]: E0126 00:08:41.659883 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.683039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.683158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.683180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.683207 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.683225 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.785763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.785816 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.785828 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.785848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.785860 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.889028 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.889132 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.889156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.889187 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.889209 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.992094 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.992128 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.992136 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.992149 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:41 crc kubenswrapper[4697]: I0126 00:08:41.992157 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:41Z","lastTransitionTime":"2026-01-26T00:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.093761 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.093812 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.093825 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.093843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.093857 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.196158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.196205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.196216 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.196236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.196250 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.299975 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.300029 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.300045 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.300093 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.300112 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.402658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.402689 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.402699 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.402715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.402725 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.505683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.505738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.505752 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.505775 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.505790 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.608757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.608801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.608813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.608832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.608844 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.640342 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:06:19.541020849 +0000 UTC Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.659650 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.659727 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.659797 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:42 crc kubenswrapper[4697]: E0126 00:08:42.659786 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:42 crc kubenswrapper[4697]: E0126 00:08:42.659927 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:42 crc kubenswrapper[4697]: E0126 00:08:42.660012 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.711148 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.711208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.711224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.711242 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.711256 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.813551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.813856 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.813939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.814040 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.814162 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.916971 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.917010 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.917019 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.917033 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:42 crc kubenswrapper[4697]: I0126 00:08:42.917046 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:42Z","lastTransitionTime":"2026-01-26T00:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.019465 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.019496 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.019503 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.019516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.019525 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.122883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.122925 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.122934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.122948 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.122958 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.225501 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.225555 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.225572 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.225596 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.225613 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.329644 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.329698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.329715 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.329741 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.329762 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.432545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.432656 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.432737 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.432805 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.432829 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.535285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.535339 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.535353 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.535372 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.535385 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.637751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.638024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.638036 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.638052 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.638065 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.640945 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:58:48.206799891 +0000 UTC Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.660217 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:43 crc kubenswrapper[4697]: E0126 00:08:43.660330 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.741196 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.741237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.741248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.741267 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.741278 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.843606 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.843637 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.843647 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.843661 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.843670 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.946459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.946491 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.946502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.946516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:43 crc kubenswrapper[4697]: I0126 00:08:43.946527 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:43Z","lastTransitionTime":"2026-01-26T00:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.049249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.049282 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.049292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.049307 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.049317 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.151751 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.151788 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.151800 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.151815 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.151826 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.253940 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.253979 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.253990 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.254007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.254019 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.356832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.356877 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.356887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.356903 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.356913 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.459439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.459490 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.459506 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.459533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.459550 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.562101 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.562150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.562166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.562191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.562208 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.641349 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:58:05.457095889 +0000 UTC Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.659845 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.659936 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:44 crc kubenswrapper[4697]: E0126 00:08:44.659992 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.660023 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:44 crc kubenswrapper[4697]: E0126 00:08:44.660212 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:44 crc kubenswrapper[4697]: E0126 00:08:44.660335 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.664773 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.664809 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.664817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.664831 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.664841 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.767700 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.767766 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.767788 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.767849 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.767872 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.870170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.870210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.870222 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.870237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.870249 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.972419 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.972465 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.972477 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.972494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:44 crc kubenswrapper[4697]: I0126 00:08:44.972507 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:44Z","lastTransitionTime":"2026-01-26T00:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.074921 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.074970 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.074983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.075001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.075014 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.177154 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.177229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.177241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.177260 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.177272 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.279689 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.279733 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.279750 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.279768 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.279779 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.382165 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.382206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.382218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.382233 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.382244 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.485963 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.486024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.486049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.486143 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.486169 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.589893 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.589955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.589974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.590000 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.590017 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.642222 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:28:27.389604763 +0000 UTC Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.660264 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:45 crc kubenswrapper[4697]: E0126 00:08:45.660596 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.692813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.692846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.692854 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.692867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.692877 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.795289 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.795388 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.795412 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.795445 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.795471 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.898437 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.898487 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.898501 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.898519 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:45 crc kubenswrapper[4697]: I0126 00:08:45.898531 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:45Z","lastTransitionTime":"2026-01-26T00:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.005306 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.005372 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.005399 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.005424 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.005442 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.108801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.109118 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.109311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.109587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.109788 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.212801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.212860 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.212878 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.212901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.212918 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.315730 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.316057 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.316238 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.316370 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.316507 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.419892 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.419982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.419998 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.420022 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.420038 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.523054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.523153 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.523171 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.523198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.523215 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.626310 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.626354 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.626366 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.626383 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.626397 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.642927 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:04:48.883113167 +0000 UTC
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.660359 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.660592 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.660363 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 00:08:46 crc kubenswrapper[4697]: E0126 00:08:46.660940 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 00:08:46 crc kubenswrapper[4697]: E0126 00:08:46.661046 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 00:08:46 crc kubenswrapper[4697]: E0126 00:08:46.661222 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.728879 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.728926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.728939 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.728959 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.728974 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.831989 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.832325 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.832423 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.832540 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.832627 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.935965 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.936248 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.936342 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.936472 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:46 crc kubenswrapper[4697]: I0126 00:08:46.936599 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:46Z","lastTransitionTime":"2026-01-26T00:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.038884 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.039221 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.039287 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.039364 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.039429 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.141525 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.141562 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.141571 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.141587 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.141597 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.244942 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.244968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.244976 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.244989 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.244998 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.347817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.347883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.347901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.347926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.347945 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.450224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.450320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.450343 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.450373 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.450394 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.552733 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.552791 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.552813 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.552842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.552865 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.644425 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:10:59.659109767 +0000 UTC
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.655236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.655278 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.655290 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.655308 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.655320 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.659499 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft"
Jan 26 00:08:47 crc kubenswrapper[4697]: E0126 00:08:47.659696 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.757967 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.758009 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.758019 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.758033 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.758044 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.860917 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.860985 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.861007 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.861035 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.861058 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.951840 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft"
Jan 26 00:08:47 crc kubenswrapper[4697]: E0126 00:08:47.952132 4697 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:08:47 crc kubenswrapper[4697]: E0126 00:08:47.952505 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs podName:dfdab702-3a9b-4646-ad6b-9bb9404e92ad nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.952476736 +0000 UTC m=+101.589254166 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs") pod "network-metrics-daemon-xctft" (UID: "dfdab702-3a9b-4646-ad6b-9bb9404e92ad") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.963817 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.964065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.964269 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.964439 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:47 crc kubenswrapper[4697]: I0126 00:08:47.964613 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:47Z","lastTransitionTime":"2026-01-26T00:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.067823 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.067862 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.067872 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.067887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.067897 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.170608 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.171043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.171203 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.171255 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.171269 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.195611 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.195650 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.195662 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.195676 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.195687 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.212711 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.217123 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.217154 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.217165 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.217178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.217191 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.237325 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.242464 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.242500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.242511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.242524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.242533 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.260608 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.265617 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.265827 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.265836 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.265851 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.265862 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.280143 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.284183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.284232 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.284249 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.284272 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.284288 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.304033 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.304204 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.306117 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.306154 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.306164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.306178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.306187 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.409274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.409313 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.409326 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.409341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.409351 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.512350 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.512377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.512433 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.512449 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.512458 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.615118 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.615163 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.615175 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.615191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.615202 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.644812 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:58:30.443586069 +0000 UTC Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.660242 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.660320 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.660406 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.660490 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.660780 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:48 crc kubenswrapper[4697]: E0126 00:08:48.660920 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.673485 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.694327 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:34Z\\\",\\\"message\\\":\\\"on-xctft\\\\nF0126 00:08:34.500245 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z]\\\\nI0126 00:08:34.500295 6367 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xctft in node crc\\\\nI0126 00:08:34.500047 6367 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0126 00:08:34.500178 6367 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab05
5da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.704673 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.717179 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.717792 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.717842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.717857 4697 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.717875 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.718232 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.730041 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.740109 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
6T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.751966 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.767243 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.777526 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.792547 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0
990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761f
f92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
6-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.807406 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.820101 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.821284 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.821345 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.821361 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.821385 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.821429 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.834139 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.844578 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.863135 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.874982 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc 
kubenswrapper[4697]: I0126 00:08:48.887181 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.924416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.924456 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.924466 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.924480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:48 crc kubenswrapper[4697]: I0126 00:08:48.924491 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:48Z","lastTransitionTime":"2026-01-26T00:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.026884 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.026930 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.026941 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.026959 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.026973 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.129988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.130046 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.130101 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.130134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.130154 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.232846 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.232874 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.232883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.232897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.232905 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.335792 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.335860 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.335882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.335910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.335930 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.438765 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.438819 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.438832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.438857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.438875 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.542657 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.542719 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.542732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.542758 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.542785 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.644571 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.644638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.644660 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.644688 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.644709 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.645320 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:26:02.840051422 +0000 UTC Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.660174 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:49 crc kubenswrapper[4697]: E0126 00:08:49.660843 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.661259 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624" Jan 26 00:08:49 crc kubenswrapper[4697]: E0126 00:08:49.661512 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.747468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.747517 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.747529 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.747548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.747560 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.849861 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.849927 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.849944 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.849967 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.849985 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.952417 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.952463 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.952480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.952506 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:49 crc kubenswrapper[4697]: I0126 00:08:49.952522 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:49Z","lastTransitionTime":"2026-01-26T00:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.054460 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.054529 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.054551 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.054573 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.054590 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.156676 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.156712 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.156724 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.156738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.156748 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.258902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.258953 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.258966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.258982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.258994 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.361474 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.361545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.361568 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.361595 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.361612 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.463254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.463295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.463306 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.463323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.463334 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.565914 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.565969 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.565982 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.565997 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.566009 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.646117 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:21:54.671411767 +0000 UTC Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.660380 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.660500 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.660401 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:50 crc kubenswrapper[4697]: E0126 00:08:50.660608 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:50 crc kubenswrapper[4697]: E0126 00:08:50.660710 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:50 crc kubenswrapper[4697]: E0126 00:08:50.660899 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.669334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.669374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.669387 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.669406 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.669427 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.772412 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.772484 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.772507 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.772539 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.772564 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.875191 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.875228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.875236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.875252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.875261 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.977533 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.977593 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.977626 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.977641 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:50 crc kubenswrapper[4697]: I0126 00:08:50.977651 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:50Z","lastTransitionTime":"2026-01-26T00:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.080207 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.080279 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.080292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.080314 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.080352 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.090024 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bjlq7_638a78f4-bdb3-4d78-8faf-b4bc299717d2/kube-multus/0.log" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.090094 4697 generic.go:334] "Generic (PLEG): container finished" podID="638a78f4-bdb3-4d78-8faf-b4bc299717d2" containerID="1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69" exitCode=1 Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.090137 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bjlq7" event={"ID":"638a78f4-bdb3-4d78-8faf-b4bc299717d2","Type":"ContainerDied","Data":"1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.090610 4697 scope.go:117] "RemoveContainer" containerID="1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.117769 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09
ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.132028 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.149164 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.161947 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.175329 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.182664 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.182700 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.182713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.182733 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.182748 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.193275 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.208366 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.223917 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.238568 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.251570 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.264129 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:50Z\\\",\\\"message\\\":\\\"2026-01-26T00:08:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5722734e-0e05-4a67-839a-db335f570257\\\\n2026-01-26T00:08:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5722734e-0e05-4a67-839a-db335f570257 to /host/opt/cni/bin/\\\\n2026-01-26T00:08:05Z [verbose] multus-daemon started\\\\n2026-01-26T00:08:05Z [verbose] Readiness Indicator file check\\\\n2026-01-26T00:08:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.273095 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc 
kubenswrapper[4697]: I0126 00:08:51.285390 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.285555 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.285577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.285585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.285599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.285607 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.294402 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.316626 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:34Z\\\",\\\"message\\\":\\\"on-xctft\\\\nF0126 00:08:34.500245 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z]\\\\nI0126 00:08:34.500295 6367 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xctft in node crc\\\\nI0126 00:08:34.500047 6367 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0126 00:08:34.500178 6367 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab05
5da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.328729 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.341780 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.387993 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.388102 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.388122 4697 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.388178 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.388196 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.490158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.490218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.490241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.490305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.490329 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.591976 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.592008 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.592017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.592029 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.592037 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.646422 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:52:20.5174331 +0000 UTC Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.659716 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:51 crc kubenswrapper[4697]: E0126 00:08:51.659846 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.693868 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.693926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.693937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.693951 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.693992 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.796605 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.796653 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.796666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.796683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.796697 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.899893 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.900341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.900501 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.900633 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:51 crc kubenswrapper[4697]: I0126 00:08:51.900727 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:51Z","lastTransitionTime":"2026-01-26T00:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.003395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.003446 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.003459 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.003478 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.003491 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.095779 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bjlq7_638a78f4-bdb3-4d78-8faf-b4bc299717d2/kube-multus/0.log" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.095876 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bjlq7" event={"ID":"638a78f4-bdb3-4d78-8faf-b4bc299717d2","Type":"ContainerStarted","Data":"826e1d4598d9301073e86a13701fc4475d515c4911462d2d7299a6f7fdec3ee5"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.106156 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.106208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.106218 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.106237 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.106250 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.107543 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.120220 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826e1d4598d9301073e86a13701fc4475d515c4911462d2d7299a6f7fdec3ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:50Z\\\",\\\"message\\\":\\\"2026-01-26T00:08:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5722734e-0e05-4a67-839a-db335f570257\\\\n2026-01-26T00:08:05+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5722734e-0e05-4a67-839a-db335f570257 to /host/opt/cni/bin/\\\\n2026-01-26T00:08:05Z [verbose] multus-daemon started\\\\n2026-01-26T00:08:05Z [verbose] Readiness Indicator file check\\\\n2026-01-26T00:08:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.134963 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc 
kubenswrapper[4697]: I0126 00:08:52.151493 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.168363 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.187225 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.202855 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.209953 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 
00:08:52.210003 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.210011 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.210027 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.210038 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.213429 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.228313 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.257831 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:34Z\\\",\\\"message\\\":\\\"on-xctft\\\\nF0126 00:08:34.500245 6367 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z]\\\\nI0126 00:08:34.500295 6367 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xctft in node crc\\\\nI0126 00:08:34.500047 6367 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0126 00:08:34.500178 6367 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab05
5da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.275503 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.290377 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.302240 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.312499 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.312552 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.312567 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.312591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.312606 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.321940 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.338858 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.363439 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.382693 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.416201 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.416288 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.416302 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.416323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.416335 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.519560 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.520148 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.520312 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.520502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.520658 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.623713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.623790 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.623810 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.623839 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.623897 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.647347 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:28:28.879806058 +0000 UTC Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.659885 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.660035 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.660159 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:52 crc kubenswrapper[4697]: E0126 00:08:52.660143 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:52 crc kubenswrapper[4697]: E0126 00:08:52.660352 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:52 crc kubenswrapper[4697]: E0126 00:08:52.660453 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.727292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.727335 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.727347 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.727368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.727380 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.829287 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.829328 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.829339 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.829355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.829367 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.931966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.932004 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.932016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.932032 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:52 crc kubenswrapper[4697]: I0126 00:08:52.932044 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:52Z","lastTransitionTime":"2026-01-26T00:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.034368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.034410 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.034424 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.034442 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.034454 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.136589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.136666 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.136683 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.136705 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.136721 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.239203 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.239305 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.239320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.239346 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.239366 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.342279 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.342348 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.342368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.342393 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.342411 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.445282 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.445337 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.445356 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.445378 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.445397 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.548960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.549024 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.549040 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.549065 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.549105 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.647505 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:06:32.380044361 +0000 UTC Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.651891 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.651941 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.651959 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.651985 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.652002 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.660345 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:53 crc kubenswrapper[4697]: E0126 00:08:53.660509 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.754354 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.754420 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.754443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.754471 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.754495 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.857253 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.857317 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.857341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.857369 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.857392 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.960405 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.960492 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.960516 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.960545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:53 crc kubenswrapper[4697]: I0126 00:08:53.960567 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:53Z","lastTransitionTime":"2026-01-26T00:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.064117 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.064176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.064200 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.064228 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.064254 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.166643 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.166678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.166690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.166704 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.166715 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.269295 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.269323 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.269333 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.269347 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.269357 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.371638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.371669 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.371679 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.371692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.371702 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.473737 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.473996 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.474095 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.474177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.474246 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.577127 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.577164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.577175 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.577190 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.577201 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.648345 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:58:53.617663224 +0000 UTC Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.659697 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.659781 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.659847 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:54 crc kubenswrapper[4697]: E0126 00:08:54.660289 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:54 crc kubenswrapper[4697]: E0126 00:08:54.660532 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:54 crc kubenswrapper[4697]: E0126 00:08:54.660714 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.681274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.681340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.681369 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.681415 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.681454 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.784870 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.784920 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.784934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.784955 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.784969 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.887334 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.887401 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.887416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.887438 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.887456 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.991597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.991697 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.991716 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.991742 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:54 crc kubenswrapper[4697]: I0126 00:08:54.991759 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:54Z","lastTransitionTime":"2026-01-26T00:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.095025 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.095109 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.095129 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.095153 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.095171 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.198167 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.198225 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.198285 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.198308 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.198326 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.301497 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.301585 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.301607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.301639 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.301665 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.404769 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.404882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.404901 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.404926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.404944 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.507738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.507899 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.507931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.507960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.507986 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.610837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.610911 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.610922 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.610959 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.610972 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.649907 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:57:04.375801157 +0000 UTC Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.660375 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:55 crc kubenswrapper[4697]: E0126 00:08:55.660600 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.714708 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.714771 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.714781 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.714801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.714813 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.818045 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.818208 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.818229 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.818254 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.818271 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.922492 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.922537 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.922547 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.922564 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:55 crc kubenswrapper[4697]: I0126 00:08:55.922575 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:55Z","lastTransitionTime":"2026-01-26T00:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.025277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.025349 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.025363 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.025386 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.025423 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.128226 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.128283 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.128299 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.128318 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.128332 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.231131 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.231176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.231188 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.231206 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.231219 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.334676 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.334747 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.334762 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.334778 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.334791 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.437277 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.437341 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.437362 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.437397 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.437419 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.540625 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.541061 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.541352 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.541720 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.541984 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.645051 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.645140 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.645158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.645183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.645201 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.650229 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:04:37.156435229 +0000 UTC Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.659705 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:56 crc kubenswrapper[4697]: E0126 00:08:56.659859 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.659917 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:56 crc kubenswrapper[4697]: E0126 00:08:56.660104 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.660546 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:56 crc kubenswrapper[4697]: E0126 00:08:56.660923 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.747814 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.747861 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.747872 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.747887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.747896 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.850933 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.850963 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.850971 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.850984 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.850994 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.953443 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.953505 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.953522 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.953548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:56 crc kubenswrapper[4697]: I0126 00:08:56.953567 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:56Z","lastTransitionTime":"2026-01-26T00:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.056480 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.056985 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.057017 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.057048 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.057122 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.160183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.160246 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.160264 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.160289 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.160309 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.263112 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.263163 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.263177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.263193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.263204 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.366760 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.366838 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.366856 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.366882 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.366900 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.469974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.470021 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.470032 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.470048 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.470060 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.573366 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.573483 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.573503 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.573527 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.573544 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.651673 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 22:10:23.210794648 +0000 UTC Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.660287 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:57 crc kubenswrapper[4697]: E0126 00:08:57.660463 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.676677 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.676753 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.676774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.676799 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.676817 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.778867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.778937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.778960 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.778988 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.779011 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.881463 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.881524 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.881541 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.881567 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.881583 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.984297 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.984372 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.984396 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.984430 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:57 crc kubenswrapper[4697]: I0126 00:08:57.984453 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:57Z","lastTransitionTime":"2026-01-26T00:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.087239 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.087311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.087328 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.087351 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.087368 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.189400 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.189448 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.189460 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.189476 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.189490 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.292713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.292764 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.292778 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.292794 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.292806 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.384299 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.384343 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.384355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.384373 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.384388 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.400374 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.404578 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.404609 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.404619 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.404638 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.404651 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.417889 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.422687 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.422718 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.422730 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.422745 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.422755 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.442418 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.446857 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.447198 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.447399 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.447592 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.447800 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.468769 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.473054 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.473232 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.473339 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.473430 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.473514 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.490606 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"531db5dc-eb17-4614-9e49-846436142df2\\\",\\\"systemUUID\\\":\\\"715d4b12-1cfe-46a5-a312-3c756db76134\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.490841 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.492734 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.492777 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.492793 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.492815 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.492832 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.594743 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.594833 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.594848 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.594866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.594879 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.652431 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:55:54.032931008 +0000 UTC Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.659959 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.660009 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.660111 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.660221 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.660289 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:08:58 crc kubenswrapper[4697]: E0126 00:08:58.660359 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.676949 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.690007 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6ddpl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b26ea82-1613-4153-8587-2e598acccba0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85ef280989dc0ca737f9cee365eb1a7ed95867b6ee22036156f03d512d62eb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtgsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6ddpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.698274 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.698561 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.698837 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.699179 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.699471 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.715124 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-bjlq7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"638a78f4-bdb3-4d78-8faf-b4bc299717d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://826e1d4598d9301073e86a13701fc4475d515c4911462d2d7299a6f7fdec3ee5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:50Z\\\",\\\"message\\\":\\\"2026-01-26T00:08:05+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5722734e-0e05-4a67-839a-db335f570257\\\\n2026-01-26T00:08:05+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5722734e-0e05-4a67-839a-db335f570257 to /host/opt/cni/bin/\\\\n2026-01-26T00:08:05Z [verbose] multus-daemon started\\\\n2026-01-26T00:08:05Z [verbose] Readiness Indicator file check\\\\n2026-01-26T00:08:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rbsfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-bjlq7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.729387 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xctft" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92hq4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xctft\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.748580 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bab085c4-e67b-4f15-bf7a-38646054f603\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c268a78042d90aa6b0ca1108a8d9593c7998e3f3e8dde07cfbbae1bc3c1bc6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da157a5fb287fc146f98c70968430c1ad9d73bfd157dfebd13685bc3377aa3a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a4941055e85e639a68cc6795aecac330892d945f47a937a90ff3c11d91cc5e7\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.768719 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.781139 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc7fe5485fa76fda0c77c69f93029543c0ea7758fa27f81a2bbc5eb4700d19dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53e6307d5ac2dd654322968317982f578ddf434dbf75b4f19c71024d9556b53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.790715 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2d3adb1-27d5-4fa0-a85e-35000080ac39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d6a83df8eb96ebb8bd347d272adfaa0e36592ac1d0a1015c17a30bbef1a8f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde726d2c00ae2842d203896388bde59c0b395e1
2a9238183bd5bdaf6bdc3e98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g9ws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mb5j7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.800193 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccf0abe4-7d69-43ad-aa11-747002f33846\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dc014ddc117cfbf525b2c14dca15500d043dabbaeae79b7cd1c2175f102d6db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd1eacca68e9c0586608851c250e8f55db39f
0689470ac19a0451e1fcd8e28ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bdvzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-skrqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.802651 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.802686 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.802697 4697 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.802712 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.802722 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.810380 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://71d790d183f6568afaaad552493b43761ddc8addd4dced04d0b342c1ef4a9d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.827678 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T00:08:34Z\\\",\\\"message\\\":\\\"on-xctft\\\\nF0126 00:08:34.500245 6367 ovnkube.go:137] failed to run 
ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:34Z is after 2025-08-24T17:21:41Z]\\\\nI0126 00:08:34.500295 6367 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-xctft in node crc\\\\nI0126 00:08:34.500047 6367 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0126 00:08:34.500178 6367 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h7x5s_openshift-ovn-kubernetes(9b97fcec-14c2-49b1-bdc5-762e1b42d7a4)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bee1e892c88803ab05
5da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtbg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7x5s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.843832 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7725f0ae7fbdc1823810e6e310e9f5fe48b2caeed30ca6c415ddc940703cbf0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.859025 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.869397 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bgwmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2134310-ccdf-4e23-bb12-123af52cc758\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683a86f77ab068e76efb51ac5e603f792e1b38b3c09e4b82a4de4b707989ab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lxmzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bgwmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.881163 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T00:07:56Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 00:07:51.026649 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 00:07:51.037199 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2051362258/tls.crt::/tmp/serving-cert-2051362258/tls.key\\\\\\\"\\\\nI0126 00:07:56.369415 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 00:07:56.375804 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 00:07:56.375835 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 00:07:56.375870 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 00:07:56.375879 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 00:07:56.382040 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 00:07:56.382086 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 00:07:56.382094 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0126 00:07:56.382096 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0126 00:07:56.382102 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 00:07:56.382107 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 00:07:56.382111 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 00:07:56.382115 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0126 00:07:56.383786 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T0
0:07:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.893555 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f7a6702-cb87-41a2-9d1b-0d1a12f20cc5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:07:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a2eb516bd21ac2a7d5ad11802808a2b31d4cfbe8c618772790f1d3c77da3d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a72ab779cb9d881c185d408cc96f22469a29933e83e8c06363b3e6c5965ada4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaec2bbb5e31b34b927d10ff152ccd0a1a451f8a000595033f64161660e4795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://510e2e1dbc8e7ebe6b88b9a5051f9e2c23fe34b07007cf89ee6e60424b571a4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:07:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:07:39Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:07:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.904883 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.904907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.904915 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.904928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.904937 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:58Z","lastTransitionTime":"2026-01-26T00:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:58 crc kubenswrapper[4697]: I0126 00:08:58.908545 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6e81e9c-cc13-478a-91ce-6ad9d9c7d716\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T00:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://349fa1d8ab48d7c34ea936963bd730fff009436314c032c4ed96cb0171a2b3b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T00:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b4673686aa51d2d68118ea484fa2859e62624b3ad6de757b11fb4325e7c8086\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://70972d6580a0990ce30930b8b0ae02f8e0e2d74b78e41d68f64f78e650e5ddd9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://192c933e761ff92ad6ed384bd2d2bc0a6e3a83ec384511e96793ae71f37cc6fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0760a150d5e27b599fb63f21b4b2add5ed87297b2c91a56325239adad82f6dbe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b72c066e6d7a5d4ec99db2835be8893a9caa1ceff33cb0ff875ee10445fc956\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fbc9d50a19a2e8939caad7f30888798c3195bcc70472c66271cddb53ef43b1e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h725n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T00:08:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-sb8k8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T00:08:58Z is after 2025-08-24T17:21:41Z" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.008565 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.008618 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.008635 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.008657 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.008673 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.111786 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.111869 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.111896 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.111928 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.111954 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.215399 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.215468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.215485 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.215510 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.215528 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.319427 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.319491 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.319514 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.319541 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.319562 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.422193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.422241 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.422252 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.422268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.422281 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.524591 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.524637 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.524649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.524664 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.524677 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.628489 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.628562 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.628584 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.628615 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.628637 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.653120 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:48:33.412078312 +0000 UTC Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.660547 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:08:59 crc kubenswrapper[4697]: E0126 00:08:59.660929 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.731177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.731296 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.731320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.731350 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.731374 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.834672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.834740 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.834759 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.834783 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.834801 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.937897 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.937962 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.937987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.938018 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:08:59 crc kubenswrapper[4697]: I0126 00:08:59.938041 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:08:59Z","lastTransitionTime":"2026-01-26T00:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.041001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.041058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.041109 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.041129 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.041141 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.144907 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.144961 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.145002 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.145020 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.145039 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.248693 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.248767 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.248790 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.248816 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.248835 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.351757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.351826 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.351842 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.351865 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.351883 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.454812 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.454890 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.454910 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.454934 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.454954 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.557996 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.558049 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.558060 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.558110 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.558133 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.586981 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.587272 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:10:04.587233407 +0000 UTC m=+146.224010837 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.653806 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:43:18.50412441 +0000 UTC Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.660109 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.660211 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.660250 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.660359 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.660568 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.660635 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.661183 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.661394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.661421 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.661448 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.661477 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.680636 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.688143 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.688230 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.688295 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688314 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.688334 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688350 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688366 4697 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688429 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:04.688411911 +0000 UTC m=+146.325189311 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688427 4697 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688474 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688505 4697 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688512 4697 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688533 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:04.688505474 +0000 UTC m=+146.325282874 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688540 4697 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688617 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:04.688593057 +0000 UTC m=+146.325370487 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 00:09:00 crc kubenswrapper[4697]: E0126 00:09:00.688647 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 00:10:04.688634118 +0000 UTC m=+146.325411548 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.764268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.764320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.764333 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.764350 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.764362 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.867468 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.867547 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.867569 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.867599 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.867623 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.970931 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.970987 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.971005 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.971025 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:00 crc kubenswrapper[4697]: I0126 00:09:00.971037 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:00Z","lastTransitionTime":"2026-01-26T00:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.074474 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.074559 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.074577 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.074607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.074632 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.177755 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.177853 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.177877 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.177906 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.177923 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.280408 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.280475 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.280498 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.280526 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.280548 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.383546 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.383659 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.383679 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.383702 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.383721 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.485975 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.486067 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.486128 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.486166 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.486186 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.589713 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.589759 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.589771 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.589788 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.589800 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.654442 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:36:24.638244302 +0000 UTC Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.659975 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:09:01 crc kubenswrapper[4697]: E0126 00:09:01.660254 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.692195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.692256 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.692269 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.692287 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.692299 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.795465 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.795502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.795514 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.795530 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.795558 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.899659 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.899738 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.899801 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.899832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:01 crc kubenswrapper[4697]: I0126 00:09:01.899854 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:01Z","lastTransitionTime":"2026-01-26T00:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.002597 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.002652 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.002668 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.002693 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.002709 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.105867 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.105946 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.105977 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.106006 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.106026 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.208952 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.209034 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.209055 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.209164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.209190 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.311820 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.311860 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.311871 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.311887 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.311898 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.414545 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.414620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.414634 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.414660 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.414682 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.517609 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.517681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.517698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.517725 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.517742 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.620292 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.620358 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.620374 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.620395 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.620410 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.654917 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:07:12.881067581 +0000 UTC Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.660277 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.660335 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:09:02 crc kubenswrapper[4697]: E0126 00:09:02.660422 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:09:02 crc kubenswrapper[4697]: E0126 00:09:02.660545 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.660646 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:09:02 crc kubenswrapper[4697]: E0126 00:09:02.662133 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.723548 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.723614 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.723631 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.723653 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.723670 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.825448 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.825496 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.825509 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.825526 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.825562 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.928885 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.929019 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.929039 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.929100 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:02 crc kubenswrapper[4697]: I0126 00:09:02.929120 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:02Z","lastTransitionTime":"2026-01-26T00:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.032968 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.033043 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.033106 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.033143 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.033166 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.136056 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.136167 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.136193 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.136224 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.136247 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.239100 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.239194 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.239212 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.239234 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.239251 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.342552 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.342649 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.342672 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.342703 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.342727 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.446253 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.446311 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.446328 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.446352 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.446368 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.549170 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.549236 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.549259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.549294 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.549316 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.652324 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.652401 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.652427 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.652457 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.652480 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.655549 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:06:31.50374431 +0000 UTC Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.660354 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:09:03 crc kubenswrapper[4697]: E0126 00:09:03.660598 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.756258 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.756320 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.756336 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.756361 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.756378 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.859849 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.859922 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.859944 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.859974 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.859998 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.963180 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.963223 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.963231 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.963244 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:03 crc kubenswrapper[4697]: I0126 00:09:03.963253 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:03Z","lastTransitionTime":"2026-01-26T00:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.067185 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.067244 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.067259 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.067281 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.067299 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.169902 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.169962 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.170118 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.170145 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.170164 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.273431 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.273488 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.273508 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.273531 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.273549 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.376746 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.376806 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.376847 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.376885 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.376903 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.479595 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.479665 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.479690 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.479719 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.479740 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.583037 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.583150 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.583174 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.583203 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.583226 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.657007 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:01:23.682001129 +0000 UTC Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.660538 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.660684 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.661226 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:09:04 crc kubenswrapper[4697]: E0126 00:09:04.661410 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:09:04 crc kubenswrapper[4697]: E0126 00:09:04.661575 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:09:04 crc kubenswrapper[4697]: E0126 00:09:04.661751 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.662111 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.686607 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.686675 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.686700 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.686732 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.686757 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.789983 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.790036 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.790056 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.790108 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.790133 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.893016 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.893148 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.893176 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.893211 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.893229 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.995978 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.996048 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.996066 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.996134 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:04 crc kubenswrapper[4697]: I0126 00:09:04.996153 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:04Z","lastTransitionTime":"2026-01-26T00:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.099803 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.099866 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.099879 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.099904 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.099919 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.203377 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.203426 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.203438 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.203458 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.203471 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.307278 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.307340 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.307352 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.307375 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.307389 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.411697 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.411748 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.411757 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.411774 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.411783 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.515147 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.515197 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.515215 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.515239 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.515256 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.619332 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.619394 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.619411 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.619435 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.619452 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.657714 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:32:26.334017767 +0000 UTC Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.660465 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:09:05 crc kubenswrapper[4697]: E0126 00:09:05.660683 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.723004 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.723046 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.723058 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.723099 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.723113 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.825056 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.825124 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.825139 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.825160 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.825176 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.927821 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.927856 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.927864 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.927878 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:05 crc kubenswrapper[4697]: I0126 00:09:05.927886 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:05Z","lastTransitionTime":"2026-01-26T00:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.030636 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.030709 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.030727 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.030755 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.030777 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.132937 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.132977 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.132989 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.133003 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.133013 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.145478 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/2.log" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.148421 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerStarted","Data":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.149040 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.176635 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=6.176617104 podStartE2EDuration="6.176617104s" podCreationTimestamp="2026-01-26 00:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.17648542 +0000 UTC m=+87.813262830" watchObservedRunningTime="2026-01-26 00:09:06.176617104 +0000 UTC m=+87.813394494" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.205906 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=70.205886509 podStartE2EDuration="1m10.205886509s" podCreationTimestamp="2026-01-26 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.194161928 +0000 UTC m=+87.830939318" watchObservedRunningTime="2026-01-26 00:09:06.205886509 +0000 UTC m=+87.842663909" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.219625 4697 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.219606019 podStartE2EDuration="38.219606019s" podCreationTimestamp="2026-01-26 00:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.206767515 +0000 UTC m=+87.843544905" watchObservedRunningTime="2026-01-26 00:09:06.219606019 +0000 UTC m=+87.856383409" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.234742 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.234790 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.234802 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.234818 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.234829 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.257837 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bgwmq" podStartSLOduration=66.257817681 podStartE2EDuration="1m6.257817681s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.243200394 +0000 UTC m=+87.879977784" watchObservedRunningTime="2026-01-26 00:09:06.257817681 +0000 UTC m=+87.894595091" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.257944 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-sb8k8" podStartSLOduration=65.257939174 podStartE2EDuration="1m5.257939174s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.257901113 +0000 UTC m=+87.894678523" watchObservedRunningTime="2026-01-26 00:09:06.257939174 +0000 UTC m=+87.894716574" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.274924 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.274905571 podStartE2EDuration="1m8.274905571s" podCreationTimestamp="2026-01-26 00:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.274767217 +0000 UTC m=+87.911544617" watchObservedRunningTime="2026-01-26 00:09:06.274905571 +0000 UTC m=+87.911682971" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.296572 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xctft"] Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 
00:09:06.296661 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:09:06 crc kubenswrapper[4697]: E0126 00:09:06.296737 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.330496 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6ddpl" podStartSLOduration=66.330476572 podStartE2EDuration="1m6.330476572s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.330149602 +0000 UTC m=+87.966926992" watchObservedRunningTime="2026-01-26 00:09:06.330476572 +0000 UTC m=+87.967253952" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.336494 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.336555 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.336571 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.336589 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.336601 4697 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.363986 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-bjlq7" podStartSLOduration=65.363971073 podStartE2EDuration="1m5.363971073s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.351914883 +0000 UTC m=+87.988692303" watchObservedRunningTime="2026-01-26 00:09:06.363971073 +0000 UTC m=+88.000748463" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.428162 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podStartSLOduration=65.428137211 podStartE2EDuration="1m5.428137211s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.426235274 +0000 UTC m=+88.063012664" watchObservedRunningTime="2026-01-26 00:09:06.428137211 +0000 UTC m=+88.064914641" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.439511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.439550 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.439561 4697 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.439578 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.439590 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.440115 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podStartSLOduration=66.440097778 podStartE2EDuration="1m6.440097778s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.439033257 +0000 UTC m=+88.075810657" watchObservedRunningTime="2026-01-26 00:09:06.440097778 +0000 UTC m=+88.076875178" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.542162 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.542491 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.542500 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.542518 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc 
kubenswrapper[4697]: I0126 00:09:06.542528 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.644315 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.644347 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.644355 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.644368 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.644376 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.658845 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:29:16.982376696 +0000 UTC Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.660206 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:09:06 crc kubenswrapper[4697]: E0126 00:09:06.660305 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.660314 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.660206 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:09:06 crc kubenswrapper[4697]: E0126 00:09:06.660367 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 00:09:06 crc kubenswrapper[4697]: E0126 00:09:06.660420 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.746508 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.746558 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.746570 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.746586 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.746596 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.849788 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.849832 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.849843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.849859 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.849870 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.952164 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.952195 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.952205 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.952217 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:06 crc kubenswrapper[4697]: I0126 00:09:06.952227 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:06Z","lastTransitionTime":"2026-01-26T00:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.054130 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.054158 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.054165 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.054177 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.054186 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.156872 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.156926 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.156944 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.156966 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.156984 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.260153 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.260230 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.260253 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.260280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.260301 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.363338 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.363405 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.363416 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.363429 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.363438 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.466425 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.466507 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.466528 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.466562 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.466587 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.569391 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.569457 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.569477 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.569502 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.569521 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.659945 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:34:54.247139117 +0000 UTC
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.660238 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft"
Jan 26 00:09:07 crc kubenswrapper[4697]: E0126 00:09:07.660716 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xctft" podUID="dfdab702-3a9b-4646-ad6b-9bb9404e92ad"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.671973 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.672041 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.672063 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.672124 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.672149 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.775763 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.775843 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.775861 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.775885 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.775902 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.878588 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.878662 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.878678 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.878702 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.878721 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.982187 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.982268 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.982282 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.982304 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:07 crc kubenswrapper[4697]: I0126 00:09:07.982319 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:07Z","lastTransitionTime":"2026-01-26T00:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.086421 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.086498 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.086511 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.086526 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.086537 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.189888 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.189954 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.189971 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.190001 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.190019 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.298108 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.298175 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.298210 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.298234 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.298251 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.401658 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.401745 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.401769 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.401799 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.401823 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.504620 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.504692 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.504711 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.504789 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.504810 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.608173 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.608240 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.608257 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.608280 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.608297 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.660423 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:11:33.451795703 +0000 UTC
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.660691 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.660765 4697 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.660961 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 00:09:08 crc kubenswrapper[4697]: E0126 00:09:08.662477 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 00:09:08 crc kubenswrapper[4697]: E0126 00:09:08.662677 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 00:09:08 crc kubenswrapper[4697]: E0126 00:09:08.662780 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.710655 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.710729 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.710749 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.710772 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.710790 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.813617 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.813681 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.813698 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.813722 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.813740 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.837579 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.837665 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.837693 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.837725 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.837746 4697 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T00:09:08Z","lastTransitionTime":"2026-01-26T00:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.904578 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-skrqt" podStartSLOduration=67.904558601 podStartE2EDuration="1m7.904558601s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:06.451724726 +0000 UTC m=+88.088502116" watchObservedRunningTime="2026-01-26 00:09:08.904558601 +0000 UTC m=+90.541336001"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.904991 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"]
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.905530 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.909224 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.909438 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.909443 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.909583 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.979406 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName:
\"kubernetes.io/host-path/91099126-c16a-4403-8863-26ec02407c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.979818 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/91099126-c16a-4403-8863-26ec02407c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.979887 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91099126-c16a-4403-8863-26ec02407c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.979921 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91099126-c16a-4403-8863-26ec02407c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:08 crc kubenswrapper[4697]: I0126 00:09:08.979954 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91099126-c16a-4403-8863-26ec02407c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.080960 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/91099126-c16a-4403-8863-26ec02407c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.081033 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/91099126-c16a-4403-8863-26ec02407c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.081123 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91099126-c16a-4403-8863-26ec02407c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.081138 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/91099126-c16a-4403-8863-26ec02407c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.081214 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/91099126-c16a-4403-8863-26ec02407c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.081156 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91099126-c16a-4403-8863-26ec02407c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.081331 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91099126-c16a-4403-8863-26ec02407c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.082556 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/91099126-c16a-4403-8863-26ec02407c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.091135 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91099126-c16a-4403-8863-26ec02407c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.105606 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/91099126-c16a-4403-8863-26ec02407c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xh858\" (UID: \"91099126-c16a-4403-8863-26ec02407c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.123412 4697 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.123747 4697 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.176388 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccrgz"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.177265 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.178290 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.179188 4697 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.180189 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.180494 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.180533 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.181163 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.181461 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.181859 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.182731 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dsh82"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.183503 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.200141 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.200644 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.201278 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.201330 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.212119 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9dqv8"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.216050 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.216249 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.217929 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.219104 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.219253 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.219412 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.219427 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.219835 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220050 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220335 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220480 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220375 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220702 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220416 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220598 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220654 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.220825 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.223279 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.223891 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29489760-4chhm"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.224464 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-4chhm"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.223361 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.227124 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.221434 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.221519 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.221577 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.221710 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.221769 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.221826 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.222868 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.228014 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.228280 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xr4t2"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.229524 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-f2kkh"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.239041 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f2kkh"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.240532 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.240970 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.242403 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.242921 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.243230 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6wtq9"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.243686 4697 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.244224 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.244450 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.248216 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.250012 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.254997 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sljx9"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.256232 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.256619 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jwzr6"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.256847 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257114 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jwzr6" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.255869 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.258653 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257251 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257464 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257506 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.260531 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.260692 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.260858 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.263956 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.265981 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.267094 4697 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257541 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257605 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.268407 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257650 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257758 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257779 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.270320 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.273830 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.257842 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.296447 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 
00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.297115 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.297285 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.297547 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.305980 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.329061 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.329476 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.329577 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.329678 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.329772 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.331455 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.331591 4697 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.331691 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.331893 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.333332 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.333592 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.333689 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.333780 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.333867 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.334060 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.334267 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.334364 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 
00:09:09.334472 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.334763 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.334850 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.334955 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335117 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xls7q"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335278 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-etcd-client\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335330 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d03320a-a15c-401c-8b78-1acddefe4192-audit-dir\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335358 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335404 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqns8\" (UniqueName: \"kubernetes.io/projected/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-kube-api-access-xqns8\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335434 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3173a91-8514-41ed-9843-c674b3b1fd75-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335460 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjjlx\" (UniqueName: \"kubernetes.io/projected/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-kube-api-access-wjjlx\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335486 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gjzz\" (UniqueName: \"kubernetes.io/projected/34ce2092-249b-4b00-8e7a-46fa672982f5-kube-api-access-2gjzz\") pod \"downloads-7954f5f757-f2kkh\" (UID: 
\"34ce2092-249b-4b00-8e7a-46fa672982f5\") " pod="openshift-console/downloads-7954f5f757-f2kkh" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335513 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-encryption-config\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335538 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-serving-cert\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335564 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335573 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335587 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25brg\" (UniqueName: \"kubernetes.io/projected/cdef5f7e-6a3f-4221-8b29-0e09630d845b-kube-api-access-25brg\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335610 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b263338c-971c-47d4-8d7c-e21e0a1c22cc-serving-cert\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335634 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxnz5\" (UniqueName: \"kubernetes.io/projected/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-kube-api-access-vxnz5\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335660 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdef5f7e-6a3f-4221-8b29-0e09630d845b-machine-approver-tls\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335684 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-node-pullsecrets\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335705 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/020261ba-2461-414a-a39f-67c4b23d1d2a-serving-cert\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335728 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dbdh\" (UniqueName: \"kubernetes.io/projected/e3173a91-8514-41ed-9843-c674b3b1fd75-kube-api-access-9dbdh\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335754 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335777 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-audit\") pod \"apiserver-76f77b778f-dsh82\" (UID: 
\"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335799 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-client-ca\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335825 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335848 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6wr\" (UniqueName: \"kubernetes.io/projected/7a5c64d6-6db8-486b-9d26-0b46adccec09-kube-api-access-4v6wr\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335874 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-images\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335900 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335949 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-config\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335978 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-dir\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336013 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336040 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: 
\"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336068 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336119 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5c64d6-6db8-486b-9d26-0b46adccec09-serving-cert\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336148 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-config\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336170 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336197 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336222 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-serving-cert\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336248 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjq9t\" (UniqueName: \"kubernetes.io/projected/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-kube-api-access-wjq9t\") pod \"image-pruner-29489760-4chhm\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") " pod="openshift-image-registry/image-pruner-29489760-4chhm" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336328 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxwb\" (UniqueName: \"kubernetes.io/projected/c66eb9c1-ad69-4acc-8d3b-82050eee2656-kube-api-access-mzxwb\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336353 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdef5f7e-6a3f-4221-8b29-0e09630d845b-auth-proxy-config\") pod \"machine-approver-56656f9798-g55tn\" (UID: 
\"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336382 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-config\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336409 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/020261ba-2461-414a-a39f-67c4b23d1d2a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336438 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336463 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp87r\" (UniqueName: \"kubernetes.io/projected/020261ba-2461-414a-a39f-67c4b23d1d2a-kube-api-access-cp87r\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:09 crc kubenswrapper[4697]: 
I0126 00:09:09.336486 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-serving-cert\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336510 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336538 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l4vx\" (UniqueName: \"kubernetes.io/projected/1d03320a-a15c-401c-8b78-1acddefe4192-kube-api-access-2l4vx\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336580 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.335519 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336609 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336745 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b263338c-971c-47d4-8d7c-e21e0a1c22cc-config\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336785 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqdg4\" (UniqueName: \"kubernetes.io/projected/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-kube-api-access-qqdg4\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336812 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-policies\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336836 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-etcd-client\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " 
pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336861 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-client-ca\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336893 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3173a91-8514-41ed-9843-c674b3b1fd75-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336935 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336992 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337004 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-audit-policies\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337030 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-audit-dir\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337038 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337092 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.336951 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-cqnrq"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337124 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-image-import-ca\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337149 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337199 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337238 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337283 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rmq\" (UniqueName: \"kubernetes.io/projected/b263338c-971c-47d4-8d7c-e21e0a1c22cc-kube-api-access-h8rmq\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " 
pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337333 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-config\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337365 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337362 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337533 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-encryption-config\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337563 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b263338c-971c-47d4-8d7c-e21e0a1c22cc-trusted-ca\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9" 
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337579 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-serviceca\") pod \"image-pruner-29489760-4chhm\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") " pod="openshift-image-registry/image-pruner-29489760-4chhm" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337595 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdef5f7e-6a3f-4221-8b29-0e09630d845b-config\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337834 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.337951 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.338361 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.338439 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.338508 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.338547 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 
26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.341186 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.341688 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.344504 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.344925 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.344955 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.345243 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.345506 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.348194 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pcl4f"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.348751 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.349580 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.350277 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.350739 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.352943 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.353423 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.353680 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.350763 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.354206 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.352204 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.353365 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.354605 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.354786 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.355968 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.356380 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.358556 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.361293 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.362700 4697 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.363206 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.370298 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.370911 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.372055 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.372728 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.373118 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.374119 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.374240 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.374785 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.374799 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.375044 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.375913 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.376440 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cj2ql"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.377558 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.379530 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kqzm4"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.379653 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.380294 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.380808 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccrgz"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.382805 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.384498 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.385197 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.386156 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9f8xv"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.386635 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.388004 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.389418 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.396716 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.399302 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kdtgd"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.399466 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.400142 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.400304 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.401336 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x98gl"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.401426 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.402190 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.402254 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.403239 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f2kkh"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.403341 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jwzr6"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.403413 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.403472 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.403383 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.403591 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.403892 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-4chhm"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.404872 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.405840 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dsh82"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.406773 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"] Jan 26 00:09:09 crc 
kubenswrapper[4697]: I0126 00:09:09.408670 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-ng9sq"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.410234 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sljx9"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.410555 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.411231 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.412172 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.413865 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.415798 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xr4t2"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.417162 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6wtq9"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.419640 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.420428 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.422905 4697 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cj2ql"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.424381 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.426172 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xls7q"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.427625 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.429326 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9dqv8"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.430995 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.432774 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9f8xv"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.434407 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.436578 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.437885 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438333 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l4vx\" (UniqueName: \"kubernetes.io/projected/1d03320a-a15c-401c-8b78-1acddefe4192-kube-api-access-2l4vx\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438381 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438414 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438445 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b263338c-971c-47d4-8d7c-e21e0a1c22cc-config\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438506 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqdg4\" (UniqueName: \"kubernetes.io/projected/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-kube-api-access-qqdg4\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438537 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-policies\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438560 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-etcd-client\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438586 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-client-ca\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438614 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3173a91-8514-41ed-9843-c674b3b1fd75-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438634 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55f74\" (UniqueName: \"kubernetes.io/projected/50f20a9a-ecce-45dd-9377-916c0a0ea723-kube-api-access-55f74\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438659 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-audit-policies\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438674 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-audit-dir\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438700 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1c2901-7d74-433a-a5d2-12627b087bf2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438720 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438735 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-image-import-ca\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438751 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438772 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad1c2901-7d74-433a-a5d2-12627b087bf2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438791 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438808 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438824 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8rmq\" (UniqueName: \"kubernetes.io/projected/b263338c-971c-47d4-8d7c-e21e0a1c22cc-kube-api-access-h8rmq\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438842 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438858 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-trusted-ca-bundle\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438888 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-config\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438905 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09ef18a7-b302-478d-8d37-1be66e4c6886-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438923 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438941 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-encryption-config\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438965 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b263338c-971c-47d4-8d7c-e21e0a1c22cc-trusted-ca\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438983 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-serviceca\") pod \"image-pruner-29489760-4chhm\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") " pod="openshift-image-registry/image-pruner-29489760-4chhm"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.438999 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdef5f7e-6a3f-4221-8b29-0e09630d845b-config\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439014 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be05e353-5f84-4beb-9f70-959589984e32-serving-cert\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439032 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r6dr\" (UniqueName: \"kubernetes.io/projected/be05e353-5f84-4beb-9f70-959589984e32-kube-api-access-7r6dr\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439050 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-default-certificate\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439089 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-oauth-config\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439111 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-etcd-client\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439126 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d03320a-a15c-401c-8b78-1acddefe4192-audit-dir\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439143 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439183 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqns8\" (UniqueName: \"kubernetes.io/projected/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-kube-api-access-xqns8\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439203 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpk9s\" (UniqueName: \"kubernetes.io/projected/62705bdc-1645-4a2b-b385-e089933f0f9f-kube-api-access-wpk9s\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439219 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-serving-cert\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439235 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3173a91-8514-41ed-9843-c674b3b1fd75-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439252 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjjlx\" (UniqueName: \"kubernetes.io/projected/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-kube-api-access-wjjlx\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439270 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gjzz\" (UniqueName: \"kubernetes.io/projected/34ce2092-249b-4b00-8e7a-46fa672982f5-kube-api-access-2gjzz\") pod \"downloads-7954f5f757-f2kkh\" (UID: \"34ce2092-249b-4b00-8e7a-46fa672982f5\") " pod="openshift-console/downloads-7954f5f757-f2kkh"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439287 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ef18a7-b302-478d-8d37-1be66e4c6886-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439313 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-encryption-config\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439332 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-serving-cert\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439340 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-etcd-serving-ca\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439350 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-config\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439417 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-service-ca-bundle\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439461 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439491 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25brg\" (UniqueName: \"kubernetes.io/projected/cdef5f7e-6a3f-4221-8b29-0e09630d845b-kube-api-access-25brg\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439515 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b263338c-971c-47d4-8d7c-e21e0a1c22cc-serving-cert\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439540 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxnz5\" (UniqueName: \"kubernetes.io/projected/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-kube-api-access-vxnz5\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439569 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-config\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439594 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdef5f7e-6a3f-4221-8b29-0e09630d845b-machine-approver-tls\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439618 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-node-pullsecrets\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439644 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/020261ba-2461-414a-a39f-67c4b23d1d2a-serving-cert\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439672 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dbdh\" (UniqueName: \"kubernetes.io/projected/e3173a91-8514-41ed-9843-c674b3b1fd75-kube-api-access-9dbdh\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439697 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62705bdc-1645-4a2b-b385-e089933f0f9f-trusted-ca\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439694 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b263338c-971c-47d4-8d7c-e21e0a1c22cc-config\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439729 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-stats-auth\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439756 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-oauth-serving-cert\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439784 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439809 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-audit\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439834 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-client-ca\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439860 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62705bdc-1645-4a2b-b385-e089933f0f9f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439887 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439913 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6wr\" (UniqueName: \"kubernetes.io/projected/7a5c64d6-6db8-486b-9d26-0b46adccec09-kube-api-access-4v6wr\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439932 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439939 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-images\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439963 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.439990 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-config\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440015 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1c2901-7d74-433a-a5d2-12627b087bf2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440041 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-dir\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440082 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440111 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440145 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440170 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5c64d6-6db8-486b-9d26-0b46adccec09-serving-cert\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440196 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4tl5\" (UniqueName: \"kubernetes.io/projected/d4687e4b-813c-425f-ac21-cc39b28872dd-kube-api-access-n4tl5\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440252 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-config\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440275 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440296 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-service-ca\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440321 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440345 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-serving-cert\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440371 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjq9t\" (UniqueName: \"kubernetes.io/projected/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-kube-api-access-wjq9t\") pod \"image-pruner-29489760-4chhm\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") " pod="openshift-image-registry/image-pruner-29489760-4chhm"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440394 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62705bdc-1645-4a2b-b385-e089933f0f9f-metrics-tls\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440420 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q84fg\" (UniqueName: \"kubernetes.io/projected/09ef18a7-b302-478d-8d37-1be66e4c6886-kube-api-access-q84fg\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440466 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4687e4b-813c-425f-ac21-cc39b28872dd-service-ca-bundle\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440503 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxwb\" (UniqueName: \"kubernetes.io/projected/c66eb9c1-ad69-4acc-8d3b-82050eee2656-kube-api-access-mzxwb\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440526 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdef5f7e-6a3f-4221-8b29-0e09630d845b-auth-proxy-config\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440549 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-config\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440572 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/020261ba-2461-414a-a39f-67c4b23d1d2a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440601 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440627 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp87r\" (UniqueName: \"kubernetes.io/projected/020261ba-2461-414a-a39f-67c4b23d1d2a-kube-api-access-cp87r\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440649 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-metrics-certs\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " 
pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440673 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-serving-cert\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440696 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440934 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cj44g"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440937 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3173a91-8514-41ed-9843-c674b3b1fd75-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.441871 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.441923 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.442139 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-etcd-client\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.444025 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-image-import-ca\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.442666 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-policies\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.442960 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-dir\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.443412 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-serviceca\") pod \"image-pruner-29489760-4chhm\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") " pod="openshift-image-registry/image-pruner-29489760-4chhm" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.443557 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.443733 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-config\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.442435 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-audit-policies\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.444187 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-audit-dir\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.444653 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cdef5f7e-6a3f-4221-8b29-0e09630d845b-config\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.444951 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.445627 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.440196 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-client-ca\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.446927 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: 
I0126 00:09:09.447517 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/020261ba-2461-414a-a39f-67c4b23d1d2a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.447548 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b263338c-971c-47d4-8d7c-e21e0a1c22cc-serving-cert\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.447661 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-etcd-client\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.449032 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b263338c-971c-47d4-8d7c-e21e0a1c22cc-trusted-ca\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.449727 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.450737 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-serving-cert\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.452629 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xlxqs"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.453049 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-node-pullsecrets\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.453275 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.453560 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.453713 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-client-ca\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.453805 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-audit\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454007 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454136 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-images\") pod 
\"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454179 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-config\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454356 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/020261ba-2461-414a-a39f-67c4b23d1d2a-serving-cert\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454398 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-encryption-config\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454398 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454529 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xlxqs" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454670 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454728 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdef5f7e-6a3f-4221-8b29-0e09630d845b-machine-approver-tls\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.454808 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-encryption-config\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.455011 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d03320a-a15c-401c-8b78-1acddefe4192-serving-cert\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.455112 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3173a91-8514-41ed-9843-c674b3b1fd75-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.455234 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.455297 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.455324 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d03320a-a15c-401c-8b78-1acddefe4192-audit-dir\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.455289 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-config\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.455674 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc 
kubenswrapper[4697]: I0126 00:09:09.455757 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-config\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.456058 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdef5f7e-6a3f-4221-8b29-0e09630d845b-auth-proxy-config\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.458718 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d03320a-a15c-401c-8b78-1acddefe4192-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.458815 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.458879 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.458982 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7a5c64d6-6db8-486b-9d26-0b46adccec09-serving-cert\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.459089 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-serving-cert\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.459139 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.459391 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.459491 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 
00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.460244 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.461634 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xlxqs"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.463046 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.463176 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.464701 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kdtgd"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.465948 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cj44g"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.467540 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.468893 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kqzm4"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.470940 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pcl4f"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.472000 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ng9sq"] Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.472602 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"] 
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.473632    4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x98gl"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.474734    4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lf9d6"]
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.475294    4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lf9d6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.480230    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.499388    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.520313    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.539767    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541129    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09ef18a7-b302-478d-8d37-1be66e4c6886-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541258    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be05e353-5f84-4beb-9f70-959589984e32-serving-cert\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541357    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r6dr\" (UniqueName: \"kubernetes.io/projected/be05e353-5f84-4beb-9f70-959589984e32-kube-api-access-7r6dr\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541467    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-default-certificate\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541572    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-oauth-config\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541694    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpk9s\" (UniqueName: \"kubernetes.io/projected/62705bdc-1645-4a2b-b385-e089933f0f9f-kube-api-access-wpk9s\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541798    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-serving-cert\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.541926    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ef18a7-b302-478d-8d37-1be66e4c6886-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.542049    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-config\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.542210    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-service-ca-bundle\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.542341    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-config\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.542452    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62705bdc-1645-4a2b-b385-e089933f0f9f-trusted-ca\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.542551    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-stats-auth\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.542716    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-oauth-serving-cert\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.543598    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62705bdc-1645-4a2b-b385-e089933f0f9f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.543747    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1c2901-7d74-433a-a5d2-12627b087bf2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.543898    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4tl5\" (UniqueName: \"kubernetes.io/projected/d4687e4b-813c-425f-ac21-cc39b28872dd-kube-api-access-n4tl5\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.544151    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-service-ca\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.544298    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62705bdc-1645-4a2b-b385-e089933f0f9f-metrics-tls\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.544395    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q84fg\" (UniqueName: \"kubernetes.io/projected/09ef18a7-b302-478d-8d37-1be66e4c6886-kube-api-access-q84fg\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.544493    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4687e4b-813c-425f-ac21-cc39b28872dd-service-ca-bundle\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.544702    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-metrics-certs\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.544916    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55f74\" (UniqueName: \"kubernetes.io/projected/50f20a9a-ecce-45dd-9377-916c0a0ea723-kube-api-access-55f74\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.545018    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1c2901-7d74-433a-a5d2-12627b087bf2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.545150    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad1c2901-7d74-433a-a5d2-12627b087bf2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.545274    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.543538    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-oauth-serving-cert\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.545456    4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-trusted-ca-bundle\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.543239    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-service-ca-bundle\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.545527    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-serving-cert\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.545213    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-config\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.543835    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62705bdc-1645-4a2b-b385-e089933f0f9f-trusted-ca\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.543471    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-config\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.546358    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-service-ca\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.546381    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-default-certificate\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.546477    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be05e353-5f84-4beb-9f70-959589984e32-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.547051    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f20a9a-ecce-45dd-9377-916c0a0ea723-trusted-ca-bundle\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.548013    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be05e353-5f84-4beb-9f70-959589984e32-serving-cert\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.549624    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-metrics-certs\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.549699    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62705bdc-1645-4a2b-b385-e089933f0f9f-metrics-tls\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.550571    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d4687e4b-813c-425f-ac21-cc39b28872dd-stats-auth\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.550632    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50f20a9a-ecce-45dd-9377-916c0a0ea723-console-oauth-config\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.560366    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.566791    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4687e4b-813c-425f-ac21-cc39b28872dd-service-ca-bundle\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.579508    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.599315    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.609111    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad1c2901-7d74-433a-a5d2-12627b087bf2-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.619762    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.640580    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.660294    4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.660711    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.661155    4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:16:23.663272799 +0000 UTC
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.661561    4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.665743    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad1c2901-7d74-433a-a5d2-12627b087bf2-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.699870    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.721352    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.740347    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.759484    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.779753    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.800186    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.819147    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.839008    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.859682    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.879278    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.899489    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.905226    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ef18a7-b302-478d-8d37-1be66e4c6886-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.920950    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.922160    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09ef18a7-b302-478d-8d37-1be66e4c6886-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.940291    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.980151    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 26 00:09:09 crc kubenswrapper[4697]: I0126 00:09:09.999014    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.020984    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.040399    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.060719    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.081302    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.101061    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.120497    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.140771    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.159323    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.166387    4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858" event={"ID":"91099126-c16a-4403-8863-26ec02407c06","Type":"ContainerStarted","Data":"6105ebf86d7347432eb4882c9904b23e490a0128a68496e3111cc1f75fe1ee17"}
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.166438    4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858" event={"ID":"91099126-c16a-4403-8863-26ec02407c06","Type":"ContainerStarted","Data":"ab9e46808d6966aa14ae747f6349d71b6641709aa8a051adb5dd5050ea046736"}
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.179778    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.200178    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.221367    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.239802    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.260053    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.279456    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.300718    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.319408    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.340283    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.360751    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.378892    4697 request.go:700] Waited for 1.003792949s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&limit=500&resourceVersion=0
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.381744    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.400376    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.419820    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.440525    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.460946    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.480519    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.501224    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.521196    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.539397    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.560548    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.581120    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.601005    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.619497    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.640952    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.660143    4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.660203    4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.660311    4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.661044    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.680577    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.700762    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.724880    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.740136    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.759696    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.780315    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.800374    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.820401    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.840734    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.860236    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.880061    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.900204    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.921117    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.940467    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.960262    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 00:09:10 crc kubenswrapper[4697]: I0126 00:09:10.981732    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.000457    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.020993    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.041625    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.061466    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.081303    4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.101318    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.120201    4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.168124    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l4vx\" (UniqueName: \"kubernetes.io/projected/1d03320a-a15c-401c-8b78-1acddefe4192-kube-api-access-2l4vx\") pod \"apiserver-7bbb656c7d-m2jbt\" (UID: \"1d03320a-a15c-401c-8b78-1acddefe4192\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.190550    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqdg4\" (UniqueName: \"kubernetes.io/projected/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-kube-api-access-qqdg4\") pod \"route-controller-manager-6576b87f9c-b798x\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.213879    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25brg\" (UniqueName: \"kubernetes.io/projected/cdef5f7e-6a3f-4221-8b29-0e09630d845b-kube-api-access-25brg\") pod \"machine-approver-56656f9798-g55tn\" (UID: \"cdef5f7e-6a3f-4221-8b29-0e09630d845b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.232616    4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.238982    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp87r\" (UniqueName: \"kubernetes.io/projected/020261ba-2461-414a-a39f-67c4b23d1d2a-kube-api-access-cp87r\") pod \"openshift-config-operator-7777fb866f-8cg2f\" (UID: \"020261ba-2461-414a-a39f-67c4b23d1d2a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.244905    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqns8\" (UniqueName: \"kubernetes.io/projected/36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4-kube-api-access-xqns8\") pod \"openshift-controller-manager-operator-756b6f6bc6-dgndb\" (UID: \"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.256268    4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.270126    4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rmq\" (UniqueName: \"kubernetes.io/projected/b263338c-971c-47d4-8d7c-e21e0a1c22cc-kube-api-access-h8rmq\") pod \"console-operator-58897d9998-6wtq9\" (UID: \"b263338c-971c-47d4-8d7c-e21e0a1c22cc\") " pod="openshift-console-operator/console-operator-58897d9998-6wtq9"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.275303    4697 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.284958 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjjlx\" (UniqueName: \"kubernetes.io/projected/d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b-kube-api-access-wjjlx\") pod \"apiserver-76f77b778f-dsh82\" (UID: \"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b\") " pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.288887 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.301372 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gjzz\" (UniqueName: \"kubernetes.io/projected/34ce2092-249b-4b00-8e7a-46fa672982f5-kube-api-access-2gjzz\") pod \"downloads-7954f5f757-f2kkh\" (UID: \"34ce2092-249b-4b00-8e7a-46fa672982f5\") " pod="openshift-console/downloads-7954f5f757-f2kkh" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.316211 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxnz5\" (UniqueName: \"kubernetes.io/projected/e548e601-d7aa-4a67-9a9b-14dd195fcd9e-kube-api-access-vxnz5\") pod \"machine-api-operator-5694c8668f-xr4t2\" (UID: \"e548e601-d7aa-4a67-9a9b-14dd195fcd9e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.343013 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dbdh\" (UniqueName: \"kubernetes.io/projected/e3173a91-8514-41ed-9843-c674b3b1fd75-kube-api-access-9dbdh\") pod \"openshift-apiserver-operator-796bbdcf4f-jt6jh\" (UID: \"e3173a91-8514-41ed-9843-c674b3b1fd75\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" Jan 26 00:09:11 crc 
kubenswrapper[4697]: I0126 00:09:11.363147 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6wr\" (UniqueName: \"kubernetes.io/projected/7a5c64d6-6db8-486b-9d26-0b46adccec09-kube-api-access-4v6wr\") pod \"controller-manager-879f6c89f-ccrgz\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") " pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.380431 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.385323 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjq9t\" (UniqueName: \"kubernetes.io/projected/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-kube-api-access-wjq9t\") pod \"image-pruner-29489760-4chhm\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") " pod="openshift-image-registry/image-pruner-29489760-4chhm" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.398862 4697 request.go:700] Waited for 1.945114481s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.400751 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.419963 4697 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.436682 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.439659 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.442727 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.455182 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.460852 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.479740 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.489372 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.500437 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.509972 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6wtq9"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.523427 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.524728 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-4chhm" Jan 26 00:09:11 crc kubenswrapper[4697]: W0126 00:09:11.529304 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb263338c_971c_47d4_8d7c_e21e0a1c22cc.slice/crio-846eb07a45d12e0576a134c22b96fa8545c10774d06e2db0d7498c426434d834 WatchSource:0}: Error finding container 846eb07a45d12e0576a134c22b96fa8545c10774d06e2db0d7498c426434d834: Status 404 returned error can't find the container with id 846eb07a45d12e0576a134c22b96fa8545c10774d06e2db0d7498c426434d834 Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.537820 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxwb\" (UniqueName: \"kubernetes.io/projected/c66eb9c1-ad69-4acc-8d3b-82050eee2656-kube-api-access-mzxwb\") pod \"oauth-openshift-558db77b4-9dqv8\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.539657 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.541285 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.560282 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.584012 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.584091 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.596340 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f2kkh" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.622986 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccrgz"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.624624 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r6dr\" (UniqueName: \"kubernetes.io/projected/be05e353-5f84-4beb-9f70-959589984e32-kube-api-access-7r6dr\") pod \"authentication-operator-69f744f599-sljx9\" (UID: \"be05e353-5f84-4beb-9f70-959589984e32\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.637694 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpk9s\" (UniqueName: \"kubernetes.io/projected/62705bdc-1645-4a2b-b385-e089933f0f9f-kube-api-access-wpk9s\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.661171 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.664605 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62705bdc-1645-4a2b-b385-e089933f0f9f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-bj95q\" (UID: \"62705bdc-1645-4a2b-b385-e089933f0f9f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.680585 4697 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.683658 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.686300 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4tl5\" (UniqueName: \"kubernetes.io/projected/d4687e4b-813c-425f-ac21-cc39b28872dd-kube-api-access-n4tl5\") pod \"router-default-5444994796-cqnrq\" (UID: \"d4687e4b-813c-425f-ac21-cc39b28872dd\") " pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.695764 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad1c2901-7d74-433a-a5d2-12627b087bf2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gh82v\" (UID: \"ad1c2901-7d74-433a-a5d2-12627b087bf2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v" Jan 26 00:09:11 crc kubenswrapper[4697]: W0126 00:09:11.704867 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36c73b6d_35c4_4dd3_9a03_f95ccb6ac0d4.slice/crio-bd1a2d892ef2dcb97ad3323eac23e5cc60378b3c95aed8cb129d93b294ba8d53 WatchSource:0}: Error finding container bd1a2d892ef2dcb97ad3323eac23e5cc60378b3c95aed8cb129d93b294ba8d53: Status 404 returned error can't find the container with id bd1a2d892ef2dcb97ad3323eac23e5cc60378b3c95aed8cb129d93b294ba8d53 Jan 26 00:09:11 crc kubenswrapper[4697]: W0126 00:09:11.710941 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d03320a_a15c_401c_8b78_1acddefe4192.slice/crio-052679c6c4ca73bd2511a08202a79684a9b63e55f07c386d9ae3522f5a4fdb84 WatchSource:0}: Error finding container 052679c6c4ca73bd2511a08202a79684a9b63e55f07c386d9ae3522f5a4fdb84: Status 404 returned error can't find the container with id 052679c6c4ca73bd2511a08202a79684a9b63e55f07c386d9ae3522f5a4fdb84 Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.732353 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f74\" (UniqueName: \"kubernetes.io/projected/50f20a9a-ecce-45dd-9377-916c0a0ea723-kube-api-access-55f74\") pod \"console-f9d7485db-jwzr6\" (UID: \"50f20a9a-ecce-45dd-9377-916c0a0ea723\") " pod="openshift-console/console-f9d7485db-jwzr6" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.735821 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q84fg\" (UniqueName: \"kubernetes.io/projected/09ef18a7-b302-478d-8d37-1be66e4c6886-kube-api-access-q84fg\") pod \"kube-storage-version-migrator-operator-b67b599dd-2pkdl\" (UID: \"09ef18a7-b302-478d-8d37-1be66e4c6886\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.740306 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.761312 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.816270 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.838925 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.851749 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.860123 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.865722 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jwzr6" Jan 26 00:09:11 crc kubenswrapper[4697]: W0126 00:09:11.879269 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3173a91_8514_41ed_9843_c674b3b1fd75.slice/crio-c13cf270ceceb3f4a306ce1579de1a1ee938d4d2a28287f747e82bb1458b1b92 WatchSource:0}: Error finding container c13cf270ceceb3f4a306ce1579de1a1ee938d4d2a28287f747e82bb1458b1b92: Status 404 returned error can't find the container with id c13cf270ceceb3f4a306ce1579de1a1ee938d4d2a28287f747e82bb1458b1b92 Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.880198 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.880974 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-ca\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 
00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881144 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881176 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881193 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-tls\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881210 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-trusted-ca\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881275 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-bound-sa-token\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7kn\" (UniqueName: \"kubernetes.io/projected/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-kube-api-access-hj7kn\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881325 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqpf7\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-kube-api-access-jqpf7\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881365 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881418 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-service-ca\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 
00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881462 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-serving-cert\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881500 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-certificates\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881527 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-config\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.881551 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-client\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: E0126 00:09:11.883345 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:09:12.383325735 +0000 UTC m=+94.020103235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.900661 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.910521 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.916926 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.925751 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.931333 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.932957 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.938223 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dsh82"] Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.949661 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.985120 4697 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.989456 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:11 crc kubenswrapper[4697]: E0126 00:09:11.989824 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.489787057 +0000 UTC m=+94.126564447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.989882 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/68aa7031-ccd9-4123-8158-1eec53aafd9a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.989928 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d00cd23-9f79-4b17-9c20-006b33bf7b9e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5n7g4\" (UID: \"1d00cd23-9f79-4b17-9c20-006b33bf7b9e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.989955 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95f47\" (UniqueName: \"kubernetes.io/projected/b052f719-3be0-4fb7-8e99-714c703574bd-kube-api-access-95f47\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990029 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990088 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr74m\" (UniqueName: \"kubernetes.io/projected/68aa7031-ccd9-4123-8158-1eec53aafd9a-kube-api-access-fr74m\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990128 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33ad7749-5e10-4f00-8373-681ba35a6b4f-config-volume\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990158 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fdjb\" (UniqueName: \"kubernetes.io/projected/b00d79f8-1314-4770-b195-c8e425473239-kube-api-access-8fdjb\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990259 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc76051-c9b3-463c-b63a-01635555e888-config\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990373 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8ck\" (UniqueName: \"kubernetes.io/projected/006b7315-1211-44f3-950d-2d9a74c2be04-kube-api-access-jn8ck\") pod \"ingress-canary-xlxqs\" (UID: \"006b7315-1211-44f3-950d-2d9a74c2be04\") " pod="openshift-ingress-canary/ingress-canary-xlxqs" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990429 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8plb\" (UniqueName: \"kubernetes.io/projected/0524a60c-3f95-47f7-8191-003a1f00995d-kube-api-access-n8plb\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990456 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmcm\" (UniqueName: \"kubernetes.io/projected/dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd-kube-api-access-8gmcm\") pod \"package-server-manager-789f6589d5-f62mg\" (UID: \"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990491 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/423e0d18-f30f-42d7-987a-90bbb521a550-secret-volume\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990536 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-config\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990573 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/814f45b8-36a6-49e8-adda-91191ea0cedc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-52f9l\" (UID: \"814f45b8-36a6-49e8-adda-91191ea0cedc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990596 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-socket-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990617 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84372a82-1d0b-459d-b44c-f8a754c4bf58-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kqzm4\" (UID: \"84372a82-1d0b-459d-b44c-f8a754c4bf58\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990635 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-client\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 
00:09:11.990675 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b00d79f8-1314-4770-b195-c8e425473239-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990696 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-tmpfs\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990713 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-srv-cert\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990759 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4h8p\" (UniqueName: \"kubernetes.io/projected/33ad7749-5e10-4f00-8373-681ba35a6b4f-kube-api-access-f4h8p\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990808 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: 
\"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990835 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/006b7315-1211-44f3-950d-2d9a74c2be04-cert\") pod \"ingress-canary-xlxqs\" (UID: \"006b7315-1211-44f3-950d-2d9a74c2be04\") " pod="openshift-ingress-canary/ingress-canary-xlxqs" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990849 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcc6c22d-7726-4a91-85b3-458926ae3613-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990899 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b00d79f8-1314-4770-b195-c8e425473239-srv-cert\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990917 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc265a1a-21de-413a-91c5-f25de2e1f852-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990949 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-mountpoint-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990968 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thw6z\" (UniqueName: \"kubernetes.io/projected/84372a82-1d0b-459d-b44c-f8a754c4bf58-kube-api-access-thw6z\") pod \"multus-admission-controller-857f4d67dd-kqzm4\" (UID: \"84372a82-1d0b-459d-b44c-f8a754c4bf58\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.990991 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-trusted-ca\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991043 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45c0700e-0c45-42eb-a253-c99680b07bd9-metrics-tls\") pod \"dns-operator-744455d44c-cj2ql\" (UID: \"45c0700e-0c45-42eb-a253-c99680b07bd9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991117 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991152 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759c72ce-4566-432c-9768-bfdfa5c6dc45-config\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991175 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwnm\" (UniqueName: \"kubernetes.io/projected/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-kube-api-access-qhwnm\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991208 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7kn\" (UniqueName: \"kubernetes.io/projected/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-kube-api-access-hj7kn\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991225 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcc6c22d-7726-4a91-85b3-458926ae3613-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991255 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xzlk7\" (UniqueName: \"kubernetes.io/projected/423e0d18-f30f-42d7-987a-90bbb521a550-kube-api-access-xzlk7\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991300 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759c72ce-4566-432c-9768-bfdfa5c6dc45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991315 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-csi-data-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991330 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc76051-c9b3-463c-b63a-01635555e888-serving-cert\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991399 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/578353c6-8556-40f0-976e-3cde8cdf52c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991423 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hqjj\" (UniqueName: \"kubernetes.io/projected/5bc76051-c9b3-463c-b63a-01635555e888-kube-api-access-9hqjj\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991611 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-service-ca\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991661 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84phd\" (UniqueName: \"kubernetes.io/projected/814f45b8-36a6-49e8-adda-91191ea0cedc-kube-api-access-84phd\") pod \"cluster-samples-operator-665b6dd947-52f9l\" (UID: \"814f45b8-36a6-49e8-adda-91191ea0cedc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.991741 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-serving-cert\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992235 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lnd6l\" (UniqueName: \"kubernetes.io/projected/1d00cd23-9f79-4b17-9c20-006b33bf7b9e-kube-api-access-lnd6l\") pod \"control-plane-machine-set-operator-78cbb6b69f-5n7g4\" (UID: \"1d00cd23-9f79-4b17-9c20-006b33bf7b9e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992304 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68aa7031-ccd9-4123-8158-1eec53aafd9a-images\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992326 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc265a1a-21de-413a-91c5-f25de2e1f852-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992346 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-certificates\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992366 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzfw6\" (UniqueName: \"kubernetes.io/projected/e715c38e-1f2a-4fe5-8e19-1a979a02a51a-kube-api-access-wzfw6\") pod \"migrator-59844c95c7-scx59\" (UID: 
\"e715c38e-1f2a-4fe5-8e19-1a979a02a51a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992395 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/76990632-69a9-4ec8-a37a-a1c267223148-signing-key\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992464 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-profile-collector-cert\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992495 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hqn2\" (UniqueName: \"kubernetes.io/projected/76990632-69a9-4ec8-a37a-a1c267223148-kube-api-access-5hqn2\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992520 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxkq\" (UniqueName: \"kubernetes.io/projected/45c0700e-0c45-42eb-a253-c99680b07bd9-kube-api-access-2wxkq\") pod \"dns-operator-744455d44c-cj2ql\" (UID: \"45c0700e-0c45-42eb-a253-c99680b07bd9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992568 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-ca\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992589 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/423e0d18-f30f-42d7-987a-90bbb521a550-config-volume\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992655 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-registration-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992699 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44sd4\" (UniqueName: \"kubernetes.io/projected/fcc6c22d-7726-4a91-85b3-458926ae3613-kube-api-access-44sd4\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992716 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-plugins-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:11 
crc kubenswrapper[4697]: I0126 00:09:11.992742 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c84tx\" (UniqueName: \"kubernetes.io/projected/578353c6-8556-40f0-976e-3cde8cdf52c5-kube-api-access-c84tx\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992758 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33ad7749-5e10-4f00-8373-681ba35a6b4f-metrics-tls\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992787 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc6c22d-7726-4a91-85b3-458926ae3613-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992823 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992861 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992892 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992910 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/759c72ce-4566-432c-9768-bfdfa5c6dc45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992934 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-tls\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992951 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/68aa7031-ccd9-4123-8158-1eec53aafd9a-proxy-tls\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:11 crc 
kubenswrapper[4697]: I0126 00:09:11.992967 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/578353c6-8556-40f0-976e-3cde8cdf52c5-proxy-tls\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.992985 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0524a60c-3f95-47f7-8191-003a1f00995d-certs\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993017 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8s7\" (UniqueName: \"kubernetes.io/projected/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-kube-api-access-kc8s7\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993096 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-bound-sa-token\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993130 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f62mg\" (UID: \"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993158 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqpf7\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-kube-api-access-jqpf7\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993190 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/76990632-69a9-4ec8-a37a-a1c267223148-signing-cabundle\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993239 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993264 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0524a60c-3f95-47f7-8191-003a1f00995d-node-bootstrap-token\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6" Jan 26 
00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993287 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc265a1a-21de-413a-91c5-f25de2e1f852-config\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993304 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvv67\" (UniqueName: \"kubernetes.io/projected/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-kube-api-access-wvv67\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.996023 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-trusted-ca\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.996556 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-ca\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.997630 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-certificates\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.993237 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-service-ca\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f"
Jan 26 00:09:11 crc kubenswrapper[4697]: I0126 00:09:11.998988 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-etcd-client\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f"
Jan 26 00:09:11 crc kubenswrapper[4697]: E0126 00:09:11.999277 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.49926377 +0000 UTC m=+94.136041160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.002600 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-config\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.003821 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-serving-cert\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.004205 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.013038 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-tls\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.014686 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.039609 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-bound-sa-token\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.053615 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9dqv8"]
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.058853 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqpf7\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-kube-api-access-jqpf7\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.078463 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29489760-4chhm"]
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.084756 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7kn\" (UniqueName: \"kubernetes.io/projected/b6158599-2f34-473b-a4c5-aa8a1d9c0f1b-kube-api-access-hj7kn\") pod \"etcd-operator-b45778765-pcl4f\" (UID: \"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096636 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096782 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc6c22d-7726-4a91-85b3-458926ae3613-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096804 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c84tx\" (UniqueName: \"kubernetes.io/projected/578353c6-8556-40f0-976e-3cde8cdf52c5-kube-api-access-c84tx\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096829 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33ad7749-5e10-4f00-8373-681ba35a6b4f-metrics-tls\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096848 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096871 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/578353c6-8556-40f0-976e-3cde8cdf52c5-proxy-tls\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096886 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/759c72ce-4566-432c-9768-bfdfa5c6dc45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096900 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/68aa7031-ccd9-4123-8158-1eec53aafd9a-proxy-tls\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096917 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0524a60c-3f95-47f7-8191-003a1f00995d-certs\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096932 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8s7\" (UniqueName: \"kubernetes.io/projected/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-kube-api-access-kc8s7\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096948 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f62mg\" (UID: \"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096972 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/76990632-69a9-4ec8-a37a-a1c267223148-signing-cabundle\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.096989 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0524a60c-3f95-47f7-8191-003a1f00995d-node-bootstrap-token\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097003 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc265a1a-21de-413a-91c5-f25de2e1f852-config\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097017 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvv67\" (UniqueName: \"kubernetes.io/projected/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-kube-api-access-wvv67\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097032 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/68aa7031-ccd9-4123-8158-1eec53aafd9a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097047 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097080 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d00cd23-9f79-4b17-9c20-006b33bf7b9e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5n7g4\" (UID: \"1d00cd23-9f79-4b17-9c20-006b33bf7b9e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097097 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95f47\" (UniqueName: \"kubernetes.io/projected/b052f719-3be0-4fb7-8e99-714c703574bd-kube-api-access-95f47\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097112 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr74m\" (UniqueName: \"kubernetes.io/projected/68aa7031-ccd9-4123-8158-1eec53aafd9a-kube-api-access-fr74m\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097127 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33ad7749-5e10-4f00-8373-681ba35a6b4f-config-volume\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097143 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fdjb\" (UniqueName: \"kubernetes.io/projected/b00d79f8-1314-4770-b195-c8e425473239-kube-api-access-8fdjb\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097159 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc76051-c9b3-463c-b63a-01635555e888-config\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097174 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn8ck\" (UniqueName: \"kubernetes.io/projected/006b7315-1211-44f3-950d-2d9a74c2be04-kube-api-access-jn8ck\") pod \"ingress-canary-xlxqs\" (UID: \"006b7315-1211-44f3-950d-2d9a74c2be04\") " pod="openshift-ingress-canary/ingress-canary-xlxqs"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097192 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/423e0d18-f30f-42d7-987a-90bbb521a550-secret-volume\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097207 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8plb\" (UniqueName: \"kubernetes.io/projected/0524a60c-3f95-47f7-8191-003a1f00995d-kube-api-access-n8plb\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097229 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gmcm\" (UniqueName: \"kubernetes.io/projected/dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd-kube-api-access-8gmcm\") pod \"package-server-manager-789f6589d5-f62mg\" (UID: \"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097249 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84372a82-1d0b-459d-b44c-f8a754c4bf58-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kqzm4\" (UID: \"84372a82-1d0b-459d-b44c-f8a754c4bf58\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097265 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/814f45b8-36a6-49e8-adda-91191ea0cedc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-52f9l\" (UID: \"814f45b8-36a6-49e8-adda-91191ea0cedc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097280 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-socket-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097299 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b00d79f8-1314-4770-b195-c8e425473239-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097313 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-tmpfs\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097326 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-srv-cert\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097342 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4h8p\" (UniqueName: \"kubernetes.io/projected/33ad7749-5e10-4f00-8373-681ba35a6b4f-kube-api-access-f4h8p\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097359 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097372 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/006b7315-1211-44f3-950d-2d9a74c2be04-cert\") pod \"ingress-canary-xlxqs\" (UID: \"006b7315-1211-44f3-950d-2d9a74c2be04\") " pod="openshift-ingress-canary/ingress-canary-xlxqs"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097387 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcc6c22d-7726-4a91-85b3-458926ae3613-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097435 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b00d79f8-1314-4770-b195-c8e425473239-srv-cert\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097451 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc265a1a-21de-413a-91c5-f25de2e1f852-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097467 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thw6z\" (UniqueName: \"kubernetes.io/projected/84372a82-1d0b-459d-b44c-f8a754c4bf58-kube-api-access-thw6z\") pod \"multus-admission-controller-857f4d67dd-kqzm4\" (UID: \"84372a82-1d0b-459d-b44c-f8a754c4bf58\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097482 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-mountpoint-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097498 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45c0700e-0c45-42eb-a253-c99680b07bd9-metrics-tls\") pod \"dns-operator-744455d44c-cj2ql\" (UID: \"45c0700e-0c45-42eb-a253-c99680b07bd9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097512 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097528 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759c72ce-4566-432c-9768-bfdfa5c6dc45-config\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097544 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwnm\" (UniqueName: \"kubernetes.io/projected/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-kube-api-access-qhwnm\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097558 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcc6c22d-7726-4a91-85b3-458926ae3613-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097581 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzlk7\" (UniqueName: \"kubernetes.io/projected/423e0d18-f30f-42d7-987a-90bbb521a550-kube-api-access-xzlk7\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097603 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759c72ce-4566-432c-9768-bfdfa5c6dc45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097621 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-csi-data-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097637 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc76051-c9b3-463c-b63a-01635555e888-serving-cert\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097653 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/578353c6-8556-40f0-976e-3cde8cdf52c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097667 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hqjj\" (UniqueName: \"kubernetes.io/projected/5bc76051-c9b3-463c-b63a-01635555e888-kube-api-access-9hqjj\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097697 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84phd\" (UniqueName: \"kubernetes.io/projected/814f45b8-36a6-49e8-adda-91191ea0cedc-kube-api-access-84phd\") pod \"cluster-samples-operator-665b6dd947-52f9l\" (UID: \"814f45b8-36a6-49e8-adda-91191ea0cedc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097715 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnd6l\" (UniqueName: \"kubernetes.io/projected/1d00cd23-9f79-4b17-9c20-006b33bf7b9e-kube-api-access-lnd6l\") pod \"control-plane-machine-set-operator-78cbb6b69f-5n7g4\" (UID: \"1d00cd23-9f79-4b17-9c20-006b33bf7b9e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097732 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68aa7031-ccd9-4123-8158-1eec53aafd9a-images\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097748 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc265a1a-21de-413a-91c5-f25de2e1f852-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097765 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzfw6\" (UniqueName: \"kubernetes.io/projected/e715c38e-1f2a-4fe5-8e19-1a979a02a51a-kube-api-access-wzfw6\") pod \"migrator-59844c95c7-scx59\" (UID: \"e715c38e-1f2a-4fe5-8e19-1a979a02a51a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097787 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/76990632-69a9-4ec8-a37a-a1c267223148-signing-key\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097803 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-profile-collector-cert\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097825 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hqn2\" (UniqueName: \"kubernetes.io/projected/76990632-69a9-4ec8-a37a-a1c267223148-kube-api-access-5hqn2\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097840 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wxkq\" (UniqueName: \"kubernetes.io/projected/45c0700e-0c45-42eb-a253-c99680b07bd9-kube-api-access-2wxkq\") pod \"dns-operator-744455d44c-cj2ql\" (UID: \"45c0700e-0c45-42eb-a253-c99680b07bd9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097865 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/423e0d18-f30f-42d7-987a-90bbb521a550-config-volume\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097882 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-registration-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097898 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44sd4\" (UniqueName: \"kubernetes.io/projected/fcc6c22d-7726-4a91-85b3-458926ae3613-kube-api-access-44sd4\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.097912 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-plugins-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.098170 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-plugins-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.103173 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.603135104 +0000 UTC m=+94.239912494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.104693 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/76990632-69a9-4ec8-a37a-a1c267223148-signing-cabundle\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.105952 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/578353c6-8556-40f0-976e-3cde8cdf52c5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.106035 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f2kkh"]
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.106581 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-mountpoint-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.107322 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc265a1a-21de-413a-91c5-f25de2e1f852-config\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.108215 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/68aa7031-ccd9-4123-8158-1eec53aafd9a-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.109273 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.110834 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33ad7749-5e10-4f00-8373-681ba35a6b4f-config-volume\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.111055 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0524a60c-3f95-47f7-8191-003a1f00995d-node-bootstrap-token\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.111273 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33ad7749-5e10-4f00-8373-681ba35a6b4f-metrics-tls\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.112343 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-socket-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.112816 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bc76051-c9b3-463c-b63a-01635555e888-config\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl"
Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.113237 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.113462 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b00d79f8-1314-4770-b195-c8e425473239-srv-cert\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.113587 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68aa7031-ccd9-4123-8158-1eec53aafd9a-images\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.113687 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fcc6c22d-7726-4a91-85b3-458926ae3613-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.114443 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/68aa7031-ccd9-4123-8158-1eec53aafd9a-proxy-tls\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:12 crc 
kubenswrapper[4697]: I0126 00:09:12.114572 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/578353c6-8556-40f0-976e-3cde8cdf52c5-proxy-tls\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.114743 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/759c72ce-4566-432c-9768-bfdfa5c6dc45-config\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.115848 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcc6c22d-7726-4a91-85b3-458926ae3613-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.116121 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-csi-data-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.116296 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/423e0d18-f30f-42d7-987a-90bbb521a550-secret-volume\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.117930 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45c0700e-0c45-42eb-a253-c99680b07bd9-metrics-tls\") pod \"dns-operator-744455d44c-cj2ql\" (UID: \"45c0700e-0c45-42eb-a253-c99680b07bd9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.118450 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-tmpfs\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.118518 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/423e0d18-f30f-42d7-987a-90bbb521a550-config-volume\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.118778 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b052f719-3be0-4fb7-8e99-714c703574bd-registration-dir\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.125759 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/759c72ce-4566-432c-9768-bfdfa5c6dc45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.126421 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/814f45b8-36a6-49e8-adda-91191ea0cedc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-52f9l\" (UID: \"814f45b8-36a6-49e8-adda-91191ea0cedc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.127784 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/76990632-69a9-4ec8-a37a-a1c267223148-signing-key\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.128352 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-profile-collector-cert\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.128954 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0524a60c-3f95-47f7-8191-003a1f00995d-certs\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.129615 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1d00cd23-9f79-4b17-9c20-006b33bf7b9e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5n7g4\" (UID: \"1d00cd23-9f79-4b17-9c20-006b33bf7b9e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.133136 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bc76051-c9b3-463c-b63a-01635555e888-serving-cert\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.137793 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8s7\" (UniqueName: \"kubernetes.io/projected/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-kube-api-access-kc8s7\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.138842 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.141527 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc265a1a-21de-413a-91c5-f25de2e1f852-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" Jan 26 00:09:12 crc 
kubenswrapper[4697]: I0126 00:09:12.141948 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-f62mg\" (UID: \"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.147186 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3979fd71-42bb-4ff6-a978-1b5d86c3a1e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bwtmw\" (UID: \"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.148025 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-srv-cert\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.148583 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b00d79f8-1314-4770-b195-c8e425473239-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.149059 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/006b7315-1211-44f3-950d-2d9a74c2be04-cert\") pod \"ingress-canary-xlxqs\" (UID: \"006b7315-1211-44f3-950d-2d9a74c2be04\") " 
pod="openshift-ingress-canary/ingress-canary-xlxqs" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.152573 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84372a82-1d0b-459d-b44c-f8a754c4bf58-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-kqzm4\" (UID: \"84372a82-1d0b-459d-b44c-f8a754c4bf58\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.163351 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcc6c22d-7726-4a91-85b3-458926ae3613-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.164458 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xr4t2"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.181523 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn8ck\" (UniqueName: \"kubernetes.io/projected/006b7315-1211-44f3-950d-2d9a74c2be04-kube-api-access-jn8ck\") pod \"ingress-canary-xlxqs\" (UID: \"006b7315-1211-44f3-950d-2d9a74c2be04\") " pod="openshift-ingress-canary/ingress-canary-xlxqs" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.190557 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-4chhm" event={"ID":"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa","Type":"ContainerStarted","Data":"963548a00499ea9daf4cea57ae207876662fced667f4c8a7601cbb8956fe7303"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.202137 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.202460 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.702447292 +0000 UTC m=+94.339224682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.217864 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc265a1a-21de-413a-91c5-f25de2e1f852-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kd9d6\" (UID: \"fc265a1a-21de-413a-91c5-f25de2e1f852\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.219549 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f2kkh" event={"ID":"34ce2092-249b-4b00-8e7a-46fa672982f5","Type":"ContainerStarted","Data":"9260b921c58536698ef7f03620ae69ab21cd3e44dca5105ab015704550721142"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.243873 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-thw6z\" (UniqueName: \"kubernetes.io/projected/84372a82-1d0b-459d-b44c-f8a754c4bf58-kube-api-access-thw6z\") pod \"multus-admission-controller-857f4d67dd-kqzm4\" (UID: \"84372a82-1d0b-459d-b44c-f8a754c4bf58\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.246406 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.258320 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" event={"ID":"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b","Type":"ContainerStarted","Data":"80530e6f46fef019fbee9c3c5068c7c24e81034e59e80af5f08bc2431c355114"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.259194 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c84tx\" (UniqueName: \"kubernetes.io/projected/578353c6-8556-40f0-976e-3cde8cdf52c5-kube-api-access-c84tx\") pod \"machine-config-controller-84d6567774-brqbv\" (UID: \"578353c6-8556-40f0-976e-3cde8cdf52c5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.274911 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/759c72ce-4566-432c-9768-bfdfa5c6dc45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p7cg8\" (UID: \"759c72ce-4566-432c-9768-bfdfa5c6dc45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.278246 4697 generic.go:334] "Generic (PLEG): container finished" podID="1d03320a-a15c-401c-8b78-1acddefe4192" containerID="87ac51ce4b5242fe5d2ba49953675f4994666597d1285ac96815783e243d4061" exitCode=0 Jan 26 00:09:12 crc kubenswrapper[4697]: 
I0126 00:09:12.278316 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" event={"ID":"1d03320a-a15c-401c-8b78-1acddefe4192","Type":"ContainerDied","Data":"87ac51ce4b5242fe5d2ba49953675f4994666597d1285ac96815783e243d4061"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.278348 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" event={"ID":"1d03320a-a15c-401c-8b78-1acddefe4192","Type":"ContainerStarted","Data":"052679c6c4ca73bd2511a08202a79684a9b63e55f07c386d9ae3522f5a4fdb84"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.289853 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvv67\" (UniqueName: \"kubernetes.io/projected/15fbfbf7-36fe-45c2-b722-4bb67e28d89f-kube-api-access-wvv67\") pod \"catalog-operator-68c6474976-pjgvp\" (UID: \"15fbfbf7-36fe-45c2-b722-4bb67e28d89f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.298159 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.302414 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.302701 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.302784 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.80276725 +0000 UTC m=+94.439544640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.302711 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95f47\" (UniqueName: \"kubernetes.io/projected/b052f719-3be0-4fb7-8e99-714c703574bd-kube-api-access-95f47\") pod \"csi-hostpathplugin-cj44g\" (UID: \"b052f719-3be0-4fb7-8e99-714c703574bd\") " pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.303102 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.303406 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.803394609 +0000 UTC m=+94.440171999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.305952 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" event={"ID":"e3173a91-8514-41ed-9843-c674b3b1fd75","Type":"ContainerStarted","Data":"d75e4a76b0ed854c94011b0c2020c584787569909ce7d9fdb0c6e3116ce8e809"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.305995 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" event={"ID":"e3173a91-8514-41ed-9843-c674b3b1fd75","Type":"ContainerStarted","Data":"c13cf270ceceb3f4a306ce1579de1a1ee938d4d2a28287f747e82bb1458b1b92"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.319919 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" 
event={"ID":"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4","Type":"ContainerStarted","Data":"db0c654db6f5649d25fd8c45c766cf0c90cc2b27986fc7bdc4b1cd196e796ea2"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.320241 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" event={"ID":"36c73b6d-35c4-4dd3-9a03-f95ccb6ac0d4","Type":"ContainerStarted","Data":"bd1a2d892ef2dcb97ad3323eac23e5cc60378b3c95aed8cb129d93b294ba8d53"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.324445 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr74m\" (UniqueName: \"kubernetes.io/projected/68aa7031-ccd9-4123-8158-1eec53aafd9a-kube-api-access-fr74m\") pod \"machine-config-operator-74547568cd-dfc9g\" (UID: \"68aa7031-ccd9-4123-8158-1eec53aafd9a\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.325002 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-sljx9"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.330329 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.350816 4697 generic.go:334] "Generic (PLEG): container finished" podID="020261ba-2461-414a-a39f-67c4b23d1d2a" containerID="722c01ec01b37d18fe5ec1d35e007a58932d6b858f12447a914a2324e31930ea" exitCode=0 Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.350949 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" event={"ID":"020261ba-2461-414a-a39f-67c4b23d1d2a","Type":"ContainerDied","Data":"722c01ec01b37d18fe5ec1d35e007a58932d6b858f12447a914a2324e31930ea"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.350986 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" event={"ID":"020261ba-2461-414a-a39f-67c4b23d1d2a","Type":"ContainerStarted","Data":"86b98b4c161139197d1592a2ebd0c1448492226feef7effc1ed559c0c490dbbe"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.382261 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.385559 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.386661 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hqjj\" (UniqueName: \"kubernetes.io/projected/5bc76051-c9b3-463c-b63a-01635555e888-kube-api-access-9hqjj\") pod \"service-ca-operator-777779d784-x98gl\" (UID: \"5bc76051-c9b3-463c-b63a-01635555e888\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.387491 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fdjb\" (UniqueName: \"kubernetes.io/projected/b00d79f8-1314-4770-b195-c8e425473239-kube-api-access-8fdjb\") pod \"olm-operator-6b444d44fb-qsdk7\" (UID: \"b00d79f8-1314-4770-b195-c8e425473239\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.389794 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6wtq9" event={"ID":"b263338c-971c-47d4-8d7c-e21e0a1c22cc","Type":"ContainerStarted","Data":"69f447bd86de91a4a2838b3769abad6617abdd9753e623512f3702dbf0cec654"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.389843 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6wtq9" event={"ID":"b263338c-971c-47d4-8d7c-e21e0a1c22cc","Type":"ContainerStarted","Data":"846eb07a45d12e0576a134c22b96fa8545c10774d06e2db0d7498c426434d834"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.390904 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.399563 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cqnrq" 
event={"ID":"d4687e4b-813c-425f-ac21-cc39b28872dd","Type":"ContainerStarted","Data":"942c29c8e9ec227c5d61415598f770cb7816bdaae62461f7ea054a1e872f0f88"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.403675 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.407355 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.411564 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.411964 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.411998 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.911963464 +0000 UTC m=+94.548740844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.412128 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.412680 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:12.912668665 +0000 UTC m=+94.549446055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.422537 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" event={"ID":"7a5c64d6-6db8-486b-9d26-0b46adccec09","Type":"ContainerStarted","Data":"e9e524060ad1861dabb4e85495424797ba26d57822e4e580aed50d0ba2b3b989"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.423387 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" event={"ID":"7a5c64d6-6db8-486b-9d26-0b46adccec09","Type":"ContainerStarted","Data":"32442d84c73e37df4987257c413ad9db75b4ce87bfc5584b379c8ba2f80ea975"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.423884 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.424518 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jwzr6"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.426788 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.436951 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzfw6\" (UniqueName: \"kubernetes.io/projected/e715c38e-1f2a-4fe5-8e19-1a979a02a51a-kube-api-access-wzfw6\") pod 
\"migrator-59844c95c7-scx59\" (UID: \"e715c38e-1f2a-4fe5-8e19-1a979a02a51a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.437517 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" event={"ID":"c66eb9c1-ad69-4acc-8d3b-82050eee2656","Type":"ContainerStarted","Data":"20e5308dfd1d5f4d4234a6492e308bfb32b098ea32a1b07f6b465107edd0a8c7"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.439525 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.461995 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" event={"ID":"cdef5f7e-6a3f-4221-8b29-0e09630d845b","Type":"ContainerStarted","Data":"e41abea453e019c12a4a92e7858fba9b0ee2baaaf95bcfae4caf68c4f0170d8a"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.462308 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" event={"ID":"cdef5f7e-6a3f-4221-8b29-0e09630d845b","Type":"ContainerStarted","Data":"e16b232933c1550024f5c84ffdd61188b8834506a84afd20d317e70b54502d64"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.462323 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" event={"ID":"cdef5f7e-6a3f-4221-8b29-0e09630d845b","Type":"ContainerStarted","Data":"81b72b0942076911b0be31482919e48f9963a44cf2d1a628d08b63fc1c4062cb"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.462384 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84phd\" (UniqueName: \"kubernetes.io/projected/814f45b8-36a6-49e8-adda-91191ea0cedc-kube-api-access-84phd\") pod 
\"cluster-samples-operator-665b6dd947-52f9l\" (UID: \"814f45b8-36a6-49e8-adda-91191ea0cedc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.462831 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnd6l\" (UniqueName: \"kubernetes.io/projected/1d00cd23-9f79-4b17-9c20-006b33bf7b9e-kube-api-access-lnd6l\") pod \"control-plane-machine-set-operator-78cbb6b69f-5n7g4\" (UID: \"1d00cd23-9f79-4b17-9c20-006b33bf7b9e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.465033 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hqn2\" (UniqueName: \"kubernetes.io/projected/76990632-69a9-4ec8-a37a-a1c267223148-kube-api-access-5hqn2\") pod \"service-ca-9c57cc56f-kdtgd\" (UID: \"76990632-69a9-4ec8-a37a-a1c267223148\") " pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.467313 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.475287 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xlxqs" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.483316 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwnm\" (UniqueName: \"kubernetes.io/projected/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-kube-api-access-qhwnm\") pod \"marketplace-operator-79b997595-9f8xv\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.483509 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" event={"ID":"0ed3e7cf-192c-47e4-8e75-9d89cda7c136","Type":"ContainerStarted","Data":"8cd09b04cf369fb266fb8e718df75806b12cfb838b04c4fa404259e2882110ce"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.483593 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" event={"ID":"0ed3e7cf-192c-47e4-8e75-9d89cda7c136","Type":"ContainerStarted","Data":"ecd465b0c0a8493f35248c44f3624e276a918509b49f4af97b11529ca4e5866c"} Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.483716 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzlk7\" (UniqueName: \"kubernetes.io/projected/423e0d18-f30f-42d7-987a-90bbb521a550-kube-api-access-xzlk7\") pod \"collect-profiles-29489760-2ms5w\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.483774 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.504822 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.505294 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8plb\" (UniqueName: \"kubernetes.io/projected/0524a60c-3f95-47f7-8191-003a1f00995d-kube-api-access-n8plb\") pod \"machine-config-server-lf9d6\" (UID: \"0524a60c-3f95-47f7-8191-003a1f00995d\") " pod="openshift-machine-config-operator/machine-config-server-lf9d6" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.514009 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.514324 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.014302132 +0000 UTC m=+94.651079522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.523829 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.526473 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.026457156 +0000 UTC m=+94.663234626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.528046 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wxkq\" (UniqueName: \"kubernetes.io/projected/45c0700e-0c45-42eb-a253-c99680b07bd9-kube-api-access-2wxkq\") pod \"dns-operator-744455d44c-cj2ql\" (UID: \"45c0700e-0c45-42eb-a253-c99680b07bd9\") " pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.567300 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.570402 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gmcm\" (UniqueName: \"kubernetes.io/projected/dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd-kube-api-access-8gmcm\") pod \"package-server-manager-789f6589d5-f62mg\" (UID: \"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.583147 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.586412 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.592966 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44sd4\" (UniqueName: \"kubernetes.io/projected/fcc6c22d-7726-4a91-85b3-458926ae3613-kube-api-access-44sd4\") pod \"cluster-image-registry-operator-dc59b4c8b-trthv\" (UID: \"fcc6c22d-7726-4a91-85b3-458926ae3613\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.606525 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4h8p\" (UniqueName: \"kubernetes.io/projected/33ad7749-5e10-4f00-8373-681ba35a6b4f-kube-api-access-f4h8p\") pod \"dns-default-ng9sq\" (UID: \"33ad7749-5e10-4f00-8373-681ba35a6b4f\") " pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.607996 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.615426 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.622417 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.625033 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.626298 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.126279449 +0000 UTC m=+94.763056839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.643994 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6wtq9" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.645379 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.645682 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.684350 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.695878 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.702731 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pcl4f"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.723714 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.723933 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.728630 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.733708 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.734088 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 00:09:13.23405944 +0000 UTC m=+94.870836830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.784127 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lf9d6" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.810226 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6"] Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.834959 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.835283 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.335267015 +0000 UTC m=+94.972044405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.859997 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.880675 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" podStartSLOduration=71.880658062 podStartE2EDuration="1m11.880658062s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:12.879174687 +0000 UTC m=+94.515952087" watchObservedRunningTime="2026-01-26 00:09:12.880658062 +0000 UTC m=+94.517435452" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.926517 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dgndb" podStartSLOduration=71.926502572 podStartE2EDuration="1m11.926502572s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:12.924586874 +0000 UTC m=+94.561364264" watchObservedRunningTime="2026-01-26 00:09:12.926502572 +0000 UTC m=+94.563279962" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.929220 4697 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.939329 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:12 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:12 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:12 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.939384 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:12 crc kubenswrapper[4697]: I0126 00:09:12.942369 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:12 crc kubenswrapper[4697]: E0126 00:09:12.942849 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.44283476 +0000 UTC m=+95.079612150 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.029052 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-kqzm4"] Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.046829 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.047096 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.547061215 +0000 UTC m=+95.183838605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.047189 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.047473 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.547467247 +0000 UTC m=+95.184244637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.153919 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.156706 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.65665352 +0000 UTC m=+95.293430910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.156906 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.157349 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.65733472 +0000 UTC m=+95.294112110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.190334 4697 csr.go:261] certificate signing request csr-9k5kv is approved, waiting to be issued Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.197832 4697 csr.go:257] certificate signing request csr-9k5kv is issued Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.232582 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-cqnrq" podStartSLOduration=72.232562199 podStartE2EDuration="1m12.232562199s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:13.230292561 +0000 UTC m=+94.867069951" watchObservedRunningTime="2026-01-26 00:09:13.232562199 +0000 UTC m=+94.869339609" Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.262345 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.262746 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:09:13.76273186 +0000 UTC m=+95.399509250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.290621 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw"] Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.365118 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.365618 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.865606995 +0000 UTC m=+95.502384385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.396846 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x98gl"] Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.453219 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cj44g"] Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.467262 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.467769 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:13.967751208 +0000 UTC m=+95.604528608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.591787 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-g55tn" podStartSLOduration=73.591771214 podStartE2EDuration="1m13.591771214s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:13.490001763 +0000 UTC m=+95.126779163" watchObservedRunningTime="2026-01-26 00:09:13.591771214 +0000 UTC m=+95.228548604" Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.592515 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.592780 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.092768744 +0000 UTC m=+95.729546124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.595317 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xlxqs"] Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.635721 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv"] Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.641935 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f2kkh" event={"ID":"34ce2092-249b-4b00-8e7a-46fa672982f5","Type":"ContainerStarted","Data":"e53681783f46fa456f555a133885373766449c96e45bcd4aca4b399b47f8a589"} Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.642913 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-f2kkh" Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.643876 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lf9d6" event={"ID":"0524a60c-3f95-47f7-8191-003a1f00995d","Type":"ContainerStarted","Data":"d1c787c0273810daa154266018fb6071352267ffe5f20910e2abf025fdf744bb"} Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.664361 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" 
start-of-body= Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.664405 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.668220 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-cqnrq" event={"ID":"d4687e4b-813c-425f-ac21-cc39b28872dd","Type":"ContainerStarted","Data":"91ff9d4858765d49d443437429cc9e1b7474ca4b764da7518c95b889ca5d47a5"} Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.674505 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" event={"ID":"84372a82-1d0b-459d-b44c-f8a754c4bf58","Type":"ContainerStarted","Data":"4e099fc47b50a7f4bcb6dd835dd75a0384cbabc121fe46dfb1e0e14ebaede054"} Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.693540 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.693945 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.193929727 +0000 UTC m=+95.830707117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.749105 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.750245 4697 generic.go:334] "Generic (PLEG): container finished" podID="d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b" containerID="e6df1bf378be6ef45034e1b1a6a299ffa746a3dc1258212f24a83bf4752a2053" exitCode=0 Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.750296 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" event={"ID":"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b","Type":"ContainerDied","Data":"e6df1bf378be6ef45034e1b1a6a299ffa746a3dc1258212f24a83bf4752a2053"} Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.787935 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9" event={"ID":"be05e353-5f84-4beb-9f70-959589984e32","Type":"ContainerStarted","Data":"08234b6e772e25ddcb0af3ac810a397a6d12099e81f8770f53e3e730ba167003"} Jan 26 00:09:13 crc kubenswrapper[4697]: W0126 00:09:13.788044 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod006b7315_1211_44f3_950d_2d9a74c2be04.slice/crio-c5cecdb25b8f5c823a4ea2dedc8bbe8d470708b88060a9a6089aa00ef382d4d7 WatchSource:0}: Error finding container c5cecdb25b8f5c823a4ea2dedc8bbe8d470708b88060a9a6089aa00ef382d4d7: Status 404 returned 
error can't find the container with id c5cecdb25b8f5c823a4ea2dedc8bbe8d470708b88060a9a6089aa00ef382d4d7 Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.799431 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.805281 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.305264765 +0000 UTC m=+95.942042155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.811257 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" event={"ID":"fc265a1a-21de-413a-91c5-f25de2e1f852","Type":"ContainerStarted","Data":"91289b474f3855d54feee6ae2ecdfa17e1da50405bbaddace7fe00751813be2e"} Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.938632 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp"] Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.939428 
4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:13 crc kubenswrapper[4697]: E0126 00:09:13.939971 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.4399526 +0000 UTC m=+96.076730020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:13 crc kubenswrapper[4697]: I0126 00:09:13.996024 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" event={"ID":"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2","Type":"ContainerStarted","Data":"7f737b6b9e572659df80b08983c1c9ff6c16b5e7d97f0ef13fe3db2122cc1e2d"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.026258 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=3.026240339 podStartE2EDuration="3.026240339s" podCreationTimestamp="2026-01-26 00:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:14.025411154 
+0000 UTC m=+95.662188544" watchObservedRunningTime="2026-01-26 00:09:14.026240339 +0000 UTC m=+95.663017729" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.040154 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" event={"ID":"c66eb9c1-ad69-4acc-8d3b-82050eee2656","Type":"ContainerStarted","Data":"8958957ef01cd545876a77343a5a6193876d5be3180b9821a963d9f70e169870"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.041414 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.043478 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.043892 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.543881356 +0000 UTC m=+96.180658746 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.103933 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:14 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:14 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:14 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.104158 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.107663 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v" event={"ID":"ad1c2901-7d74-433a-a5d2-12627b087bf2","Type":"ContainerStarted","Data":"bf1e557d42a167b5d8023538d85b222855f883d288dc10f7382a0e189a1cb271"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.108645 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" event={"ID":"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b","Type":"ContainerStarted","Data":"6441aea9a25a6daf4955c811e3bbff9c0640e0ae211d54e735bd47cea841578b"} Jan 26 00:09:14 crc 
kubenswrapper[4697]: I0126 00:09:14.108912 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.162272 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.164682 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.664660656 +0000 UTC m=+96.301438046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.165313 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-4chhm" event={"ID":"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa","Type":"ContainerStarted","Data":"94ec5d01946e2f05833faeb3dd3c73ca6048ce650bbc0d2a8cc187f1ea7f09bd"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.167209 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xh858" podStartSLOduration=73.167191621 
podStartE2EDuration="1m13.167191621s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:14.165976815 +0000 UTC m=+95.802754205" watchObservedRunningTime="2026-01-26 00:09:14.167191621 +0000 UTC m=+95.803969011" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.168191 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.168601 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.668587373 +0000 UTC m=+96.305364763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: W0126 00:09:14.193440 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15fbfbf7_36fe_45c2_b722_4bb67e28d89f.slice/crio-2e4b43ec926df83b52178d32b17a93e4ffc9a2e8d611de8eada7a7cc39af8c0e WatchSource:0}: Error finding container 2e4b43ec926df83b52178d32b17a93e4ffc9a2e8d611de8eada7a7cc39af8c0e: Status 404 returned error can't find the container with id 2e4b43ec926df83b52178d32b17a93e4ffc9a2e8d611de8eada7a7cc39af8c0e Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.198511 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" event={"ID":"62705bdc-1645-4a2b-b385-e089933f0f9f","Type":"ContainerStarted","Data":"a4a1bf62fe204e0c191f739c332cbb39085d4d1980de9584748b79131bffd0fc"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.198543 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" event={"ID":"62705bdc-1645-4a2b-b385-e089933f0f9f","Type":"ContainerStarted","Data":"3fdb2ec6dd857b379411ccf77bb2c9834881aca0665968072021636f1b563c82"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.199302 4697 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 00:04:13 +0000 UTC, rotation deadline is 2026-11-23 12:56:15.35595948 +0000 UTC Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.199362 4697 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7236h47m1.156600317s for next certificate rotation Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.212779 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.226403 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" podStartSLOduration=73.226388201 podStartE2EDuration="1m13.226388201s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:14.225049701 +0000 UTC m=+95.861827091" watchObservedRunningTime="2026-01-26 00:09:14.226388201 +0000 UTC m=+95.863165591" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.276765 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.277385 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.777351874 +0000 UTC m=+96.414129274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.294999 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.312513 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.383401 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.383719 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.883704262 +0000 UTC m=+96.520481642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.396547 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jwzr6" event={"ID":"50f20a9a-ecce-45dd-9377-916c0a0ea723","Type":"ContainerStarted","Data":"4deba8efc32c739bdc4acd6997b1438c4441de540b7dd9fcaf9c03fa9accb4a1"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.404599 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl" event={"ID":"09ef18a7-b302-478d-8d37-1be66e4c6886","Type":"ContainerStarted","Data":"ca81fcdfcd1dd2f388bd661b73e4b422aa52e711e9ee4cf066bfa8a4afe77f27"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.421573 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" event={"ID":"e548e601-d7aa-4a67-9a9b-14dd195fcd9e","Type":"ContainerStarted","Data":"b21296e67e41bc1854ffc09907ced31decaa6858442e99ab571f933c36e57105"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.421615 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" event={"ID":"e548e601-d7aa-4a67-9a9b-14dd195fcd9e","Type":"ContainerStarted","Data":"8859b441be95090bf193807a9ca8e7ff98b39e7ef9e3065207b0d27ff522ee4e"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.445898 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" event={"ID":"020261ba-2461-414a-a39f-67c4b23d1d2a","Type":"ContainerStarted","Data":"0575160c553a301b80ffaa269bade144b007131e1f7b48edb9c3f5d90a7aa116"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.446648 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.484127 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.484193 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.984180515 +0000 UTC m=+96.620957905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.485011 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.494710 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:14.994696519 +0000 UTC m=+96.631473909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.544111 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" event={"ID":"1d03320a-a15c-401c-8b78-1acddefe4192","Type":"ContainerStarted","Data":"1672a7275beb934875469f6832041f2dbb5847079200ee39c6c98b6d31018c51"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.559861 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jt6jh" podStartSLOduration=74.559841255 podStartE2EDuration="1m14.559841255s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:14.54026204 +0000 UTC m=+96.177039430" watchObservedRunningTime="2026-01-26 00:09:14.559841255 +0000 UTC m=+96.196618645" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.568504 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" event={"ID":"759c72ce-4566-432c-9768-bfdfa5c6dc45","Type":"ContainerStarted","Data":"460bde0a8035524cdf7aefbcde373266a1e7915eeefaa9c5ea08e934e9a8831f"} Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.586085 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.586216 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.086195783 +0000 UTC m=+96.722973173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.586950 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.590675 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.090663617 +0000 UTC m=+96.727441007 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: W0126 00:09:14.658231 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode715c38e_1f2a_4fe5_8e19_1a979a02a51a.slice/crio-aa61239d604cd6964cb458eae949c2b62bc1cd26f184740bbe6c46f5436b702e WatchSource:0}: Error finding container aa61239d604cd6964cb458eae949c2b62bc1cd26f184740bbe6c46f5436b702e: Status 404 returned error can't find the container with id aa61239d604cd6964cb458eae949c2b62bc1cd26f184740bbe6c46f5436b702e Jan 26 00:09:14 crc kubenswrapper[4697]: W0126 00:09:14.663943 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb00d79f8_1314_4770_b195_c8e425473239.slice/crio-37aa1b1ebf1fea77b5c3080f470fb19e9d620553d7ed2101a497c35b2fc171d1 WatchSource:0}: Error finding container 37aa1b1ebf1fea77b5c3080f470fb19e9d620553d7ed2101a497c35b2fc171d1: Status 404 returned error can't find the container with id 37aa1b1ebf1fea77b5c3080f470fb19e9d620553d7ed2101a497c35b2fc171d1 Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.688093 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.688492 4697 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.18847604 +0000 UTC m=+96.825253430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.702969 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6wtq9" podStartSLOduration=73.702946532 podStartE2EDuration="1m13.702946532s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:14.682099899 +0000 UTC m=+96.318877289" watchObservedRunningTime="2026-01-26 00:09:14.702946532 +0000 UTC m=+96.339723922" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.705544 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.766124 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.793453 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kdtgd"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.796047 
4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.799186 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.299169848 +0000 UTC m=+96.935947238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.854551 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.899192 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:14 crc kubenswrapper[4697]: E0126 00:09:14.899499 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.399484636 +0000 UTC m=+97.036262026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.929755 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:14 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:14 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:14 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.929803 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.939001 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cj2ql"] Jan 26 00:09:14 crc kubenswrapper[4697]: I0126 00:09:14.998488 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv"] Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.000443 
4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.000741 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.500729662 +0000 UTC m=+97.137507052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.119505 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.120029 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.620012117 +0000 UTC m=+97.256789497 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.217251 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-f2kkh" podStartSLOduration=74.217232963 podStartE2EDuration="1m14.217232963s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.193276997 +0000 UTC m=+96.830054387" watchObservedRunningTime="2026-01-26 00:09:15.217232963 +0000 UTC m=+96.854010353" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.237513 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.238019 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.738006653 +0000 UTC m=+97.374784043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.339716 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.340035 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.840019552 +0000 UTC m=+97.476796942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.340657 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg"] Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.447947 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jwzr6" podStartSLOduration=74.447927737 podStartE2EDuration="1m14.447927737s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.446283988 +0000 UTC m=+97.083061378" watchObservedRunningTime="2026-01-26 00:09:15.447927737 +0000 UTC m=+97.084705127" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.448963 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" podStartSLOduration=74.448957078 podStartE2EDuration="1m14.448957078s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.34094507 +0000 UTC m=+96.977722480" watchObservedRunningTime="2026-01-26 00:09:15.448957078 +0000 UTC m=+97.085734468" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.451562 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.451840 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:15.951828304 +0000 UTC m=+97.588605694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.501803 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9f8xv"] Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.552833 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.553298 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:09:16.053280896 +0000 UTC m=+97.690058286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.555109 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-ng9sq"] Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.610315 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" podStartSLOduration=75.61030196 podStartE2EDuration="1m15.61030196s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.607912839 +0000 UTC m=+97.244690229" watchObservedRunningTime="2026-01-26 00:09:15.61030196 +0000 UTC m=+97.247079350" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.654666 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.654971 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-26 00:09:16.154959144 +0000 UTC m=+97.791736524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.659627 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" event={"ID":"3979fd71-42bb-4ff6-a978-1b5d86c3a1e2","Type":"ContainerStarted","Data":"90294b0840cdf97ae2e152dc558a9093f5d637ff64b57add577c0f45e8fd6623"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.660463 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.661645 4697 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bwtmw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.661683 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" podUID="3979fd71-42bb-4ff6-a978-1b5d86c3a1e2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.677420 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v" event={"ID":"ad1c2901-7d74-433a-a5d2-12627b087bf2","Type":"ContainerStarted","Data":"5af1bbb5caf68fe485034d207d583d51fbdc5b47357ff1d76921d76e841db8a0"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.679727 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" podStartSLOduration=74.679707134 podStartE2EDuration="1m14.679707134s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.67824149 +0000 UTC m=+97.315018870" watchObservedRunningTime="2026-01-26 00:09:15.679707134 +0000 UTC m=+97.316484524" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.704345 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" event={"ID":"fcc6c22d-7726-4a91-85b3-458926ae3613","Type":"ContainerStarted","Data":"b5ecb59027acd34a274349dd11a4e4584b1b7090a1e90f708ac275f31c2c9ef3"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.706458 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xlxqs" event={"ID":"006b7315-1211-44f3-950d-2d9a74c2be04","Type":"ContainerStarted","Data":"0ba4a3378684cbeea2493b1c7a11907587898caecbc2eaf34bc153ee290a686e"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.706512 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xlxqs" event={"ID":"006b7315-1211-44f3-950d-2d9a74c2be04","Type":"ContainerStarted","Data":"c5cecdb25b8f5c823a4ea2dedc8bbe8d470708b88060a9a6089aa00ef382d4d7"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.716489 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" event={"ID":"68aa7031-ccd9-4123-8158-1eec53aafd9a","Type":"ContainerStarted","Data":"b51abb7093b445f5c399b7e42f7b217e5853aa866d8d9c28e80de1facf15c5da"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.732530 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" event={"ID":"814f45b8-36a6-49e8-adda-91191ea0cedc","Type":"ContainerStarted","Data":"0ce42bc090e28c6395b4771c45f5422c1c5bc6129791ff23bf151773cbc72e93"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.741626 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29489760-4chhm" podStartSLOduration=75.741608344 podStartE2EDuration="1m15.741608344s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.738709718 +0000 UTC m=+97.375487108" watchObservedRunningTime="2026-01-26 00:09:15.741608344 +0000 UTC m=+97.378385734" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.753441 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9" event={"ID":"be05e353-5f84-4beb-9f70-959589984e32","Type":"ContainerStarted","Data":"1450cbb309716c96217b3fdf2c6ff5b3f8ad2c24bedbc017a98cdf055a7fb503"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.756446 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.757768 4697 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:16.257743386 +0000 UTC m=+97.894520836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.760293 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" event={"ID":"45c0700e-0c45-42eb-a253-c99680b07bd9","Type":"ContainerStarted","Data":"b6eb414f03db6f8c5500b3f81b085dad4c431110fee0f09e23f185ef19e6c41a"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.768493 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl" event={"ID":"09ef18a7-b302-478d-8d37-1be66e4c6886","Type":"ContainerStarted","Data":"5a4fc54911e6a9cede408b5417a6cd30474b977fe994574a92109bd5b6d49ac4"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.769038 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" podStartSLOduration=74.769028034 podStartE2EDuration="1m14.769028034s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.768419346 +0000 UTC m=+97.405196736" 
watchObservedRunningTime="2026-01-26 00:09:15.769028034 +0000 UTC m=+97.405805424" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.771743 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" event={"ID":"423e0d18-f30f-42d7-987a-90bbb521a550","Type":"ContainerStarted","Data":"68f95e1328ff488535cae4fb17a41acd645feb692a9d5af94cae3d8a1d7a36c9"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.777154 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" event={"ID":"578353c6-8556-40f0-976e-3cde8cdf52c5","Type":"ContainerStarted","Data":"4ce76f3823f215ccf7732335ca62f4327f1ba146741a22985f51d4f2cfcaa2f9"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.777213 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" event={"ID":"578353c6-8556-40f0-976e-3cde8cdf52c5","Type":"ContainerStarted","Data":"b6a35af29f62c951dbe04b0c6c432dd6079faabff70faf9ad836d34e3ad548c0"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.784764 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" event={"ID":"1d00cd23-9f79-4b17-9c20-006b33bf7b9e","Type":"ContainerStarted","Data":"effd7a400c81086628dea527f41f0a9078138bc45c522830c23047b533b62650"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.787583 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" event={"ID":"b00d79f8-1314-4770-b195-c8e425473239","Type":"ContainerStarted","Data":"37aa1b1ebf1fea77b5c3080f470fb19e9d620553d7ed2101a497c35b2fc171d1"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.788376 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.789805 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" event={"ID":"62705bdc-1645-4a2b-b385-e089933f0f9f","Type":"ContainerStarted","Data":"68c396f4c49f968d65cf02ffadea71e21776cf2fa2fe4fb66ccae7adea64c81d"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.792326 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xr4t2" event={"ID":"e548e601-d7aa-4a67-9a9b-14dd195fcd9e","Type":"ContainerStarted","Data":"dbdd42fc13f1e64d2e72429a579fa26ea7a7f72a22d6293f3b6ddb8c2023b050"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.792798 4697 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qsdk7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.792824 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" podUID="b00d79f8-1314-4770-b195-c8e425473239" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.793748 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" event={"ID":"b052f719-3be0-4fb7-8e99-714c703574bd","Type":"ContainerStarted","Data":"c1cb0808b433c822e7d619990b4dc6ca161ca235208291228d077697e419527d"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.794320 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" event={"ID":"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd","Type":"ContainerStarted","Data":"8f20cdb9a071054c6cc95117daa2a18bfb220678c6dbdf85541cee75b550e216"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.795064 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" event={"ID":"76990632-69a9-4ec8-a37a-a1c267223148","Type":"ContainerStarted","Data":"e64ad3c8b715b70cf2f26c72284b87c430877f5ee6370e9bde085799804ed597"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.795885 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lf9d6" event={"ID":"0524a60c-3f95-47f7-8191-003a1f00995d","Type":"ContainerStarted","Data":"bfb6f1b5d1a1e8e74c9e38788f15f020e6a33d029f0efc5555067eaca63cb9ed"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.797037 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" event={"ID":"e715c38e-1f2a-4fe5-8e19-1a979a02a51a","Type":"ContainerStarted","Data":"aa61239d604cd6964cb458eae949c2b62bc1cd26f184740bbe6c46f5436b702e"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.798204 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jwzr6" event={"ID":"50f20a9a-ecce-45dd-9377-916c0a0ea723","Type":"ContainerStarted","Data":"28c926c18c5f51571538d362316bee4be572519b128c560f0f6bea1657122ee0"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.799743 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" event={"ID":"5bc76051-c9b3-463c-b63a-01635555e888","Type":"ContainerStarted","Data":"7175556b4ff1ff7763b54115ed267fd7b2f300e0727fce0c0a92a64b654a6437"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.799774 4697 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" event={"ID":"5bc76051-c9b3-463c-b63a-01635555e888","Type":"ContainerStarted","Data":"494bc9bce0b2975063c31340f090fd77258db606f241cdc1b4a05fb9a8b8503b"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.804853 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" event={"ID":"15fbfbf7-36fe-45c2-b722-4bb67e28d89f","Type":"ContainerStarted","Data":"9955efd66e7e8cd7cdeaa4c0cea6b6b759e67840a7ad17f32c9a8e515ac096a5"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.804909 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" event={"ID":"15fbfbf7-36fe-45c2-b722-4bb67e28d89f","Type":"ContainerStarted","Data":"2e4b43ec926df83b52178d32b17a93e4ffc9a2e8d611de8eada7a7cc39af8c0e"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.805377 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.807227 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" event={"ID":"84372a82-1d0b-459d-b44c-f8a754c4bf58","Type":"ContainerStarted","Data":"6c16c56be1f7c489a1b48c2f6fbb97731939119fb2673966bd55663e8cfcc961"} Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.815311 4697 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pjgvp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.815362 4697 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" podUID="15fbfbf7-36fe-45c2-b722-4bb67e28d89f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.815842 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.815861 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.860592 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.860890 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:16.360880799 +0000 UTC m=+97.997658189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.892820 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" podStartSLOduration=74.892802423 podStartE2EDuration="1m14.892802423s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.822573834 +0000 UTC m=+97.459351224" watchObservedRunningTime="2026-01-26 00:09:15.892802423 +0000 UTC m=+97.529579813" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.893686 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-sljx9" podStartSLOduration=75.893681329 podStartE2EDuration="1m15.893681329s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.887709861 +0000 UTC m=+97.524487251" watchObservedRunningTime="2026-01-26 00:09:15.893681329 +0000 UTC m=+97.530458719" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.942481 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:15 crc 
kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:15 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:15 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.942541 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.962019 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:15 crc kubenswrapper[4697]: E0126 00:09:15.964549 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:16.464529467 +0000 UTC m=+98.101306857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:15 crc kubenswrapper[4697]: I0126 00:09:15.975554 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xlxqs" podStartSLOduration=6.975535816 podStartE2EDuration="6.975535816s" podCreationTimestamp="2026-01-26 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:15.974909157 +0000 UTC m=+97.611686547" watchObservedRunningTime="2026-01-26 00:09:15.975535816 +0000 UTC m=+97.612313206" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.068736 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:16 crc kubenswrapper[4697]: E0126 00:09:16.070023 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:16.570011019 +0000 UTC m=+98.206788399 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.091063 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gh82v" podStartSLOduration=75.091043218 podStartE2EDuration="1m15.091043218s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:16.023802688 +0000 UTC m=+97.660580078" watchObservedRunningTime="2026-01-26 00:09:16.091043218 +0000 UTC m=+97.727820608" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.174786 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:16 crc kubenswrapper[4697]: E0126 00:09:16.175153 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:16.675132441 +0000 UTC m=+98.311909831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.275984 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:16 crc kubenswrapper[4697]: E0126 00:09:16.278294 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:16.778276953 +0000 UTC m=+98.415054343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.349630 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2pkdl" podStartSLOduration=75.349616766 podStartE2EDuration="1m15.349616766s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:16.115364485 +0000 UTC m=+97.752141905" watchObservedRunningTime="2026-01-26 00:09:16.349616766 +0000 UTC m=+97.986394156" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.350530 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lf9d6" podStartSLOduration=7.350522993 podStartE2EDuration="7.350522993s" podCreationTimestamp="2026-01-26 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:16.347611646 +0000 UTC m=+97.984389036" watchObservedRunningTime="2026-01-26 00:09:16.350522993 +0000 UTC m=+97.987300383" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.464678 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.465156 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.609890 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:16 crc kubenswrapper[4697]: E0126 00:09:16.610255 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.110229604 +0000 UTC m=+98.747007064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.688284 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.711485 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:16 crc 
kubenswrapper[4697]: E0126 00:09:16.732503 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.232477918 +0000 UTC m=+98.869255308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.860791 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:16 crc kubenswrapper[4697]: E0126 00:09:16.862321 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.362302498 +0000 UTC m=+98.999079898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.959062 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:16 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:16 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:16 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.959140 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" event={"ID":"68aa7031-ccd9-4123-8158-1eec53aafd9a","Type":"ContainerStarted","Data":"aef3858defa6f04ae7ae5d2b839cad95324bfa775773d0eb59a8b2b43f8b69aa"} Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.959141 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.961551 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x98gl" podStartSLOduration=75.961540594 podStartE2EDuration="1m15.961540594s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:16.805271793 +0000 UTC m=+98.442049193" watchObservedRunningTime="2026-01-26 00:09:16.961540594 +0000 UTC m=+98.598317984" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.961620 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-bj95q" podStartSLOduration=75.961616506 podStartE2EDuration="1m15.961616506s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:16.959559845 +0000 UTC m=+98.596337235" watchObservedRunningTime="2026-01-26 00:09:16.961616506 +0000 UTC m=+98.598393896" Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.963093 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:16 crc kubenswrapper[4697]: E0126 00:09:16.963342 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.463331417 +0000 UTC m=+99.100108807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.974306 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" event={"ID":"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a","Type":"ContainerStarted","Data":"64cc6208d47d4193953a07a33c649cc09dbb948feabe2035e5bcf729d9c08c37"} Jan 26 00:09:16 crc kubenswrapper[4697]: I0126 00:09:16.987707 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ng9sq" event={"ID":"33ad7749-5e10-4f00-8373-681ba35a6b4f","Type":"ContainerStarted","Data":"ddea5c08f7023768b4a30ab9432847d2646797b2710ed6461176be5ba1aacb55"} Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.015251 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" podStartSLOduration=76.015233589 podStartE2EDuration="1m16.015233589s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:17.009298151 +0000 UTC m=+98.646075541" watchObservedRunningTime="2026-01-26 00:09:17.015233589 +0000 UTC m=+98.652010979" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.035000 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" 
event={"ID":"e715c38e-1f2a-4fe5-8e19-1a979a02a51a","Type":"ContainerStarted","Data":"cd8d9275140737e9ca24d581e21ff08e073f2174092dc23228adf2545463caf0"} Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.110255 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" event={"ID":"76990632-69a9-4ec8-a37a-a1c267223148","Type":"ContainerStarted","Data":"77263b8af5432e8915b1c927ec2b0417f409d82ff1aa72f493fdf72e436ae75a"} Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.112661 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.113526 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.613513186 +0000 UTC m=+99.250290576 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.169615 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" podStartSLOduration=76.169601882 podStartE2EDuration="1m16.169601882s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:17.16751912 +0000 UTC m=+98.804296510" watchObservedRunningTime="2026-01-26 00:09:17.169601882 +0000 UTC m=+98.806379272" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.170329 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" podStartSLOduration=76.170322874 podStartE2EDuration="1m16.170322874s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:17.054584435 +0000 UTC m=+98.691361825" watchObservedRunningTime="2026-01-26 00:09:17.170322874 +0000 UTC m=+98.807100264" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.227875 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: 
\"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.228387 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.728371549 +0000 UTC m=+99.365148939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.231322 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" event={"ID":"b00d79f8-1314-4770-b195-c8e425473239","Type":"ContainerStarted","Data":"4a7a2e34cb638bfd57dd56dec54b517843521c9641d09830582b3430ed46797e"} Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.232225 4697 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qsdk7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.232256 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" podUID="b00d79f8-1314-4770-b195-c8e425473239" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: 
connection refused" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.239403 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" event={"ID":"fc265a1a-21de-413a-91c5-f25de2e1f852","Type":"ContainerStarted","Data":"ea5c22e174b14c378d21478265d692043159ebce87ec20d609b4807ea715968b"} Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.246347 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" event={"ID":"759c72ce-4566-432c-9768-bfdfa5c6dc45","Type":"ContainerStarted","Data":"f943f5262f563faad1c7a86f68eb58f30a1742c2fb698cf255894707d799aaa0"} Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.330122 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.331433 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.831411978 +0000 UTC m=+99.468189378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.341291 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pcl4f" event={"ID":"b6158599-2f34-473b-a4c5-aa8a1d9c0f1b","Type":"ContainerStarted","Data":"8a2b50fd573dbe99b04ee909d0a4e54f6395c78f431c42a8febf3bd39a2c9c21"} Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.341775 4697 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bwtmw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.341820 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" podUID="3979fd71-42bb-4ff6-a978-1b5d86c3a1e2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.342539 4697 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pjgvp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.342579 4697 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" podUID="15fbfbf7-36fe-45c2-b722-4bb67e28d89f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.350758 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-m2jbt" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.355474 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kdtgd" podStartSLOduration=76.355454697 podStartE2EDuration="1m16.355454697s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:17.354973972 +0000 UTC m=+98.991751362" watchObservedRunningTime="2026-01-26 00:09:17.355454697 +0000 UTC m=+98.992232087" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.384492 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.384553 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.405881 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8cg2f" Jan 26 
00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.442422 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.442735 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:17.942723205 +0000 UTC m=+99.579500595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.501030 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p7cg8" podStartSLOduration=76.501013897 podStartE2EDuration="1m16.501013897s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:17.500959515 +0000 UTC m=+99.137736905" watchObservedRunningTime="2026-01-26 00:09:17.501013897 +0000 UTC m=+99.137791287" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.543669 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.544918 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.044895148 +0000 UTC m=+99.681672528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.564876 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kd9d6" podStartSLOduration=76.564857525 podStartE2EDuration="1m16.564857525s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:17.5315889 +0000 UTC m=+99.168366290" watchObservedRunningTime="2026-01-26 00:09:17.564857525 +0000 UTC m=+99.201634915" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.647827 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.648265 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.148253217 +0000 UTC m=+99.785030607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.748522 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.749137 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.249118082 +0000 UTC m=+99.885895482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.849917 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.850296 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.350280855 +0000 UTC m=+99.987058245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.930679 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:17 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:17 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:17 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.930742 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.950653 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.950795 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 00:09:18.450766098 +0000 UTC m=+100.087543488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:17 crc kubenswrapper[4697]: I0126 00:09:17.950990 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:17 crc kubenswrapper[4697]: E0126 00:09:17.951329 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.451316294 +0000 UTC m=+100.088093684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.051384 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.051681 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.551666603 +0000 UTC m=+100.188443993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.152703 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.152998 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.652986421 +0000 UTC m=+100.289763811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.253689 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.253931 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.753902787 +0000 UTC m=+100.390680177 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.386137 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.386573 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.886554431 +0000 UTC m=+100.523331821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.394552 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" event={"ID":"84372a82-1d0b-459d-b44c-f8a754c4bf58","Type":"ContainerStarted","Data":"f8b68c510b21d71e236ef9f0fa51475a4756ec43d3bedc60c3a2bba55871d38b"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.451598 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" event={"ID":"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b","Type":"ContainerStarted","Data":"c2d3480af52fd6696efa0afc88dde0b20ef9ea0624028e3758bdb5dfdd1f8c6c"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.451901 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" event={"ID":"d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b","Type":"ContainerStarted","Data":"983157405a0dd14f14ff5e5b5cc2e626a5e45d4e958a11c69ab14420092c673c"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.460861 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" event={"ID":"578353c6-8556-40f0-976e-3cde8cdf52c5","Type":"ContainerStarted","Data":"e78c81766036e9da63ff867ccba268ba34e8308b10506a4d353c48d9ebaf841e"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.466875 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" 
event={"ID":"68aa7031-ccd9-4123-8158-1eec53aafd9a","Type":"ContainerStarted","Data":"bdc7f3af96e974e0a9bc7ec0bb868642ede90b18cf4c5bbc9e5e71ce1e63e452"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.486866 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.487265 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.98723609 +0000 UTC m=+100.624013480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.487408 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.487975 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:18.987942641 +0000 UTC m=+100.624720031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.497769 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" event={"ID":"814f45b8-36a6-49e8-adda-91191ea0cedc","Type":"ContainerStarted","Data":"a09d67985709785d9f7a03d2cb5d4951eea06b5b4a3ac2094a6121e25c830bc1"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.497824 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" event={"ID":"814f45b8-36a6-49e8-adda-91191ea0cedc","Type":"ContainerStarted","Data":"51ed8c77dc73fcb7078b46c488e1c7a40442a5d63783abe41823f88bdc375857"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.513964 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ng9sq" event={"ID":"33ad7749-5e10-4f00-8373-681ba35a6b4f","Type":"ContainerStarted","Data":"42d9da23d97a4962465c2820a3ee7150a707f5cb7101f80268b7886f212d3f6b"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.519147 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" 
event={"ID":"fcc6c22d-7726-4a91-85b3-458926ae3613","Type":"ContainerStarted","Data":"75c39454ca3e69295fe0f2f7d06270b97d60366316a4713112178fb6d06b2fae"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.528575 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" event={"ID":"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd","Type":"ContainerStarted","Data":"d934adfe7095f1b28075d3fa7a7fed204915260b0a6bf47b38781140339d8ec7"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.528850 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" event={"ID":"dc6a6cb9-e61b-46b8-a7ba-10c2c047f7cd","Type":"ContainerStarted","Data":"a39fb2c7b2d77ae52f7b5c1b1b64a31fdd485b4390f2dbcd50c732e4df1a9d36"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.529214 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.532178 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" event={"ID":"423e0d18-f30f-42d7-987a-90bbb521a550","Type":"ContainerStarted","Data":"6362f79f62b7633b5aff7f5edde83a10c49a1deb0dca2772f6b902190da04654"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.554132 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" podStartSLOduration=78.554113309 podStartE2EDuration="1m18.554113309s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:18.552387987 +0000 UTC m=+100.189165377" watchObservedRunningTime="2026-01-26 00:09:18.554113309 +0000 UTC m=+100.190890699" 
Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.555616 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" event={"ID":"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a","Type":"ContainerStarted","Data":"ba7403b9b3e81646d596b543e574b6f3855b17ac1b59e92a0e61f0f91ec33e50"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.556344 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.558106 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9f8xv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.558149 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.566341 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" event={"ID":"b052f719-3be0-4fb7-8e99-714c703574bd","Type":"ContainerStarted","Data":"42d142d45b3f490820f953eb1e7e7f8558b74864c13e16ffe616ba0f9f3badf5"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.589895 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.591081 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.091038122 +0000 UTC m=+100.727815522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.599131 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" podStartSLOduration=78.599114824 podStartE2EDuration="1m18.599114824s" podCreationTimestamp="2026-01-26 00:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:18.598475325 +0000 UTC m=+100.235252715" watchObservedRunningTime="2026-01-26 00:09:18.599114824 +0000 UTC m=+100.235892214" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.619006 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" event={"ID":"e715c38e-1f2a-4fe5-8e19-1a979a02a51a","Type":"ContainerStarted","Data":"1271428af7735a0b6547e14a335528e383dd321a3dece3a2ce79d0f77fbed661"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.625890 4697 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qsdk7 container/olm-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.625931 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" podUID="b00d79f8-1314-4770-b195-c8e425473239" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.626380 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" event={"ID":"45c0700e-0c45-42eb-a253-c99680b07bd9","Type":"ContainerStarted","Data":"139c0706ab52564f1f6673ca25e3c3db6dd0eee9cd547174217951a20feaeba0"} Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.643064 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-52f9l" podStartSLOduration=77.643049187 podStartE2EDuration="1m17.643049187s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:18.637617535 +0000 UTC m=+100.274394925" watchObservedRunningTime="2026-01-26 00:09:18.643049187 +0000 UTC m=+100.279826577" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.657788 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trthv" podStartSLOduration=77.657770557 podStartE2EDuration="1m17.657770557s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:18.655322564 +0000 UTC 
m=+100.292099974" watchObservedRunningTime="2026-01-26 00:09:18.657770557 +0000 UTC m=+100.294547947" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.684139 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pjgvp" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.697646 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.700238 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.200224486 +0000 UTC m=+100.837001876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.720012 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-brqbv" podStartSLOduration=77.719993386 podStartE2EDuration="1m17.719993386s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:18.695943358 +0000 UTC m=+100.332720748" watchObservedRunningTime="2026-01-26 00:09:18.719993386 +0000 UTC m=+100.356770776" Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.886712 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:18 crc kubenswrapper[4697]: E0126 00:09:18.887332 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.387315287 +0000 UTC m=+101.024092677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:18 crc kubenswrapper[4697]: I0126 00:09:18.892295 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" podStartSLOduration=77.892272165 podStartE2EDuration="1m17.892272165s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:18.722306446 +0000 UTC m=+100.359083826" watchObservedRunningTime="2026-01-26 00:09:18.892272165 +0000 UTC m=+100.529049555" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.001743 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.002169 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.502153709 +0000 UTC m=+101.138931099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.034810 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:19 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:19 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:19 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.034870 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.094335 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" podStartSLOduration=78.094306794 podStartE2EDuration="1m18.094306794s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:19.025339502 +0000 UTC m=+100.662116892" watchObservedRunningTime="2026-01-26 00:09:19.094306794 +0000 UTC m=+100.731084184" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.095944 4697 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dfc9g" podStartSLOduration=78.095934152 podStartE2EDuration="1m18.095934152s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:19.093333864 +0000 UTC m=+100.730111264" watchObservedRunningTime="2026-01-26 00:09:19.095934152 +0000 UTC m=+100.732711542" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.103921 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.104232 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.604202269 +0000 UTC m=+101.240979659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.104491 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.104868 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.604856629 +0000 UTC m=+101.241634019 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.212094 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.212558 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.712542357 +0000 UTC m=+101.349319747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.336782 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.337218 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.837207103 +0000 UTC m=+101.473984483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.438421 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.438707 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:19.938683606 +0000 UTC m=+101.575460996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.543230 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.543560 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.04354836 +0000 UTC m=+101.680325750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.557148 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-scx59" podStartSLOduration=78.557131986 podStartE2EDuration="1m18.557131986s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:19.156203773 +0000 UTC m=+100.792981163" watchObservedRunningTime="2026-01-26 00:09:19.557131986 +0000 UTC m=+101.193909376" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.644044 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-ng9sq" event={"ID":"33ad7749-5e10-4f00-8373-681ba35a6b4f","Type":"ContainerStarted","Data":"98d552176c1b66c2ec9c6179c515ef54e8eb7f0eca53ebc0403527c968944e92"} Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.644869 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.645534 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.645601 4697 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.145586059 +0000 UTC m=+101.782363439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.645930 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.646563 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.146548568 +0000 UTC m=+101.783325958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.663304 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" event={"ID":"b052f719-3be0-4fb7-8e99-714c703574bd","Type":"ContainerStarted","Data":"ebfe6d99e5b526400dbce071314cce214ce375ae3ad79c85209dbcd6d0757e36"} Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.679294 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" podStartSLOduration=78.679278046 podStartE2EDuration="1m18.679278046s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:19.678155903 +0000 UTC m=+101.314933293" watchObservedRunningTime="2026-01-26 00:09:19.679278046 +0000 UTC m=+101.316055426" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.707649 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" event={"ID":"45c0700e-0c45-42eb-a253-c99680b07bd9","Type":"ContainerStarted","Data":"697511b20edbfe5b47e85989d8c076fd4c57a58575c36e5e8f1816c7b7e94c73"} Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.769876 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.769958 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.269943576 +0000 UTC m=+101.906720966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.770416 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.770740 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.270728449 +0000 UTC m=+101.907505839 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.773305 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5n7g4" event={"ID":"1d00cd23-9f79-4b17-9c20-006b33bf7b9e","Type":"ContainerStarted","Data":"d2c9c9a6490b8610acc88a3bc05622d420365dd7fd67fb77ef8a76d17860fc17"} Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.777507 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9f8xv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.777554 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.778115 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-ng9sq" podStartSLOduration=10.77810209 podStartE2EDuration="10.77810209s" podCreationTimestamp="2026-01-26 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
00:09:19.778050278 +0000 UTC m=+101.414827658" watchObservedRunningTime="2026-01-26 00:09:19.77810209 +0000 UTC m=+101.414879480" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.871773 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.872369 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.372350377 +0000 UTC m=+102.009127767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.900490 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-kqzm4" podStartSLOduration=78.900473047 podStartE2EDuration="1m18.900473047s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:19.899728305 +0000 UTC m=+101.536505695" watchObservedRunningTime="2026-01-26 00:09:19.900473047 +0000 UTC m=+101.537250437" Jan 26 00:09:19 crc 
kubenswrapper[4697]: I0126 00:09:19.928893 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cj2ql" podStartSLOduration=78.928877416 podStartE2EDuration="1m18.928877416s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:19.922863996 +0000 UTC m=+101.559641396" watchObservedRunningTime="2026-01-26 00:09:19.928877416 +0000 UTC m=+101.565654806" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.942264 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:19 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:19 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:19 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.942351 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.974002 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.974134 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:09:19 crc kubenswrapper[4697]: E0126 00:09:19.976678 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.476659684 +0000 UTC m=+102.113437164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:19 crc kubenswrapper[4697]: I0126 00:09:19.998013 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfdab702-3a9b-4646-ad6b-9bb9404e92ad-metrics-certs\") pod \"network-metrics-daemon-xctft\" (UID: \"dfdab702-3a9b-4646-ad6b-9bb9404e92ad\") " pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.075940 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.076311 4697 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.576295872 +0000 UTC m=+102.213073262 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.152060 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kxm8l"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.152933 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.154895 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.172154 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxm8l"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.176952 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.177243 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.677231379 +0000 UTC m=+102.314008769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.278760 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.279167 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.779056302 +0000 UTC m=+102.415833692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.279243 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.279273 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-utilities\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.279304 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxpv\" (UniqueName: \"kubernetes.io/projected/52a87d4f-2b9b-44c1-9457-cedcf68d8819-kube-api-access-nlxpv\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.279335 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-catalog-content\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.283189 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.783162954 +0000 UTC m=+102.419940344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.293951 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xctft" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.359588 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w4m7r"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.360742 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.365065 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.380683 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.380995 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-catalog-content\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.381051 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-utilities\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.381098 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-utilities\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.381114 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfhv\" (UniqueName: \"kubernetes.io/projected/e2f51327-b54f-430d-8728-302b40279d68-kube-api-access-hvfhv\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.381132 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlxpv\" (UniqueName: \"kubernetes.io/projected/52a87d4f-2b9b-44c1-9457-cedcf68d8819-kube-api-access-nlxpv\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.381159 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-catalog-content\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.381845 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-catalog-content\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.381920 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.881903395 +0000 UTC m=+102.518680785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.382184 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-utilities\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.390418 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4m7r"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.408554 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlxpv\" (UniqueName: \"kubernetes.io/projected/52a87d4f-2b9b-44c1-9457-cedcf68d8819-kube-api-access-nlxpv\") pod \"community-operators-kxm8l\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.446709 4697 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.466994 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.481926 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-catalog-content\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.481983 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.482031 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-utilities\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.482056 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfhv\" (UniqueName: \"kubernetes.io/projected/e2f51327-b54f-430d-8728-302b40279d68-kube-api-access-hvfhv\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.482577 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-catalog-content\") pod 
\"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.482699 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:20.982681817 +0000 UTC m=+102.619459267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.487643 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-utilities\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.522113 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfhv\" (UniqueName: \"kubernetes.io/projected/e2f51327-b54f-430d-8728-302b40279d68-kube-api-access-hvfhv\") pod \"certified-operators-w4m7r\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.566369 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wklhb"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.567582 4697 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.582418 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.582542 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmxw\" (UniqueName: \"kubernetes.io/projected/d7db0326-548c-4c19-86c3-15af398d39cb-kube-api-access-rcmxw\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.582564 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-utilities\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.582590 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-catalog-content\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.582721 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:21.082706277 +0000 UTC m=+102.719483667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.585906 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wklhb"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.677315 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.683584 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmxw\" (UniqueName: \"kubernetes.io/projected/d7db0326-548c-4c19-86c3-15af398d39cb-kube-api-access-rcmxw\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.683619 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-utilities\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.683663 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-catalog-content\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.683700 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.683990 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 00:09:21.183974833 +0000 UTC m=+102.820752223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xls7q" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.684318 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-utilities\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.684402 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-catalog-content\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.748299 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9f66"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.749595 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.830338 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9f66"] Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.831259 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:20 crc kubenswrapper[4697]: E0126 00:09:20.831898 4697 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 00:09:21.331879904 +0000 UTC m=+102.968657294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.879645 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmxw\" (UniqueName: \"kubernetes.io/projected/d7db0326-548c-4c19-86c3-15af398d39cb-kube-api-access-rcmxw\") pod \"community-operators-wklhb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.879720 4697 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T00:09:20.4469649Z","Handler":null,"Name":""} Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.894367 4697 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.894413 4697 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.913671 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" event={"ID":"b052f719-3be0-4fb7-8e99-714c703574bd","Type":"ContainerStarted","Data":"eb9d730125461e17490da3a3a10ff3c1cafc9c21be1c58e571e3b15929a589ed"} Jan 26 00:09:20 crc kubenswrapper[4697]: 
I0126 00:09:20.914704 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9f8xv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.914743 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.934450 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.934954 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.935005 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-catalog-content\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.935026 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgrgk\" (UniqueName: 
\"kubernetes.io/projected/9cfec86d-c03e-4a9d-8571-a233cba73af1-kube-api-access-pgrgk\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.935048 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-utilities\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.973298 4697 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.973340 4697 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:20 crc kubenswrapper[4697]: I0126 00:09:20.979296 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:20 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:20 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:20 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:20 crc 
kubenswrapper[4697]: I0126 00:09:20.979344 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.035829 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-catalog-content\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.035903 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgrgk\" (UniqueName: \"kubernetes.io/projected/9cfec86d-c03e-4a9d-8571-a233cba73af1-kube-api-access-pgrgk\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.035949 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-utilities\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.037486 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-catalog-content\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.038307 4697 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-utilities\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.071116 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgrgk\" (UniqueName: \"kubernetes.io/projected/9cfec86d-c03e-4a9d-8571-a233cba73af1-kube-api-access-pgrgk\") pod \"certified-operators-p9f66\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.159647 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xctft"] Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.187236 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xls7q\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.199999 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.208391 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.240858 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.317518 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.322736 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxm8l"] Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.491718 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.494346 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.498275 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w4m7r"] Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.605768 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: 
connect: connection refused" start-of-body= Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.622055 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.598745 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.622561 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.731954 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wklhb"] Jan 26 00:09:21 crc kubenswrapper[4697]: W0126 00:09:21.766600 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7db0326_548c_4c19_86c3_15af398d39cb.slice/crio-61b85b31675fbf15d96b82a5c489f56c3e2478ea4ac8cf7a9ccf7f070d6e40a6 WatchSource:0}: Error finding container 61b85b31675fbf15d96b82a5c489f56c3e2478ea4ac8cf7a9ccf7f070d6e40a6: Status 404 returned error can't find the container with id 61b85b31675fbf15d96b82a5c489f56c3e2478ea4ac8cf7a9ccf7f070d6e40a6 Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.788224 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9f66"] Jan 
26 00:09:21 crc kubenswrapper[4697]: W0126 00:09:21.800857 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cfec86d_c03e_4a9d_8571_a233cba73af1.slice/crio-16e41de3d60b1a550d0fd6ff6d7d255ebe0ac422fea01e2f4e20501a701654e9 WatchSource:0}: Error finding container 16e41de3d60b1a550d0fd6ff6d7d255ebe0ac422fea01e2f4e20501a701654e9: Status 404 returned error can't find the container with id 16e41de3d60b1a550d0fd6ff6d7d255ebe0ac422fea01e2f4e20501a701654e9 Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.865851 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jwzr6" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.866171 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jwzr6" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.867637 4697 patch_prober.go:28] interesting pod/console-f9d7485db-jwzr6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.867673 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jwzr6" podUID="50f20a9a-ecce-45dd-9377-916c0a0ea723" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.920168 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xls7q"] Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.925979 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:21 crc kubenswrapper[4697]: 
W0126 00:09:21.929666 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d46a4f5_ef4e_4ce6_b74b_33de51e67f64.slice/crio-d0264abca61c04c56bfe8eccceeaf8a17d370fab76961ac34d45f5b801b9bf6f WatchSource:0}: Error finding container d0264abca61c04c56bfe8eccceeaf8a17d370fab76961ac34d45f5b801b9bf6f: Status 404 returned error can't find the container with id d0264abca61c04c56bfe8eccceeaf8a17d370fab76961ac34d45f5b801b9bf6f Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.930461 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wklhb" event={"ID":"d7db0326-548c-4c19-86c3-15af398d39cb","Type":"ContainerStarted","Data":"61b85b31675fbf15d96b82a5c489f56c3e2478ea4ac8cf7a9ccf7f070d6e40a6"} Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.937290 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:21 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:21 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:21 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.937347 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.947605 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xctft" event={"ID":"dfdab702-3a9b-4646-ad6b-9bb9404e92ad","Type":"ContainerStarted","Data":"77623027a7d35ead26f52c110d448fbc717a8e6e4827110a764c2c0b4159e23f"} Jan 26 00:09:21 crc 
kubenswrapper[4697]: I0126 00:09:21.958640 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerStarted","Data":"e3e67df37a665f964f9b59f7f3f016ff8206b4702b3820a1625625dc2aa39877"} Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.958683 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerStarted","Data":"540292d8a0ec28657ee310ee0c7a1df0b1e7b59b04a96c28ebd3824dd1991423"} Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.975914 4697 generic.go:334] "Generic (PLEG): container finished" podID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerID="34861e47f8b18b9c34ba6432d8725d5ca460cfe4814df1877e75bea9f24e4bcf" exitCode=0 Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.975958 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxm8l" event={"ID":"52a87d4f-2b9b-44c1-9457-cedcf68d8819","Type":"ContainerDied","Data":"34861e47f8b18b9c34ba6432d8725d5ca460cfe4814df1877e75bea9f24e4bcf"} Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.976005 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxm8l" event={"ID":"52a87d4f-2b9b-44c1-9457-cedcf68d8819","Type":"ContainerStarted","Data":"c50e9c567aedcf7d9550aec693d557271269682e8a061bafeaacc70504cd10c0"} Jan 26 00:09:21 crc kubenswrapper[4697]: I0126 00:09:21.984802 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f66" event={"ID":"9cfec86d-c03e-4a9d-8571-a233cba73af1","Type":"ContainerStarted","Data":"16e41de3d60b1a550d0fd6ff6d7d255ebe0ac422fea01e2f4e20501a701654e9"} Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.010949 4697 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.014182 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" event={"ID":"b052f719-3be0-4fb7-8e99-714c703574bd","Type":"ContainerStarted","Data":"78147a8f6b42356d85177dc1fc45092adec931a854a41251c9c7203202768da1"} Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.340105 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-cj44g" podStartSLOduration=13.340088618 podStartE2EDuration="13.340088618s" podCreationTimestamp="2026-01-26 00:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:22.044769552 +0000 UTC m=+103.681546952" watchObservedRunningTime="2026-01-26 00:09:22.340088618 +0000 UTC m=+103.976866008" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.341183 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xbn5x"] Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.342455 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.344107 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.358094 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbn5x"] Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.369709 4697 patch_prober.go:28] interesting pod/apiserver-76f77b778f-dsh82 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]log ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]etcd ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/max-in-flight-filter ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 00:09:22 crc kubenswrapper[4697]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 00:09:22 crc kubenswrapper[4697]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/openshift.io-startinformers ok Jan 26 00:09:22 crc kubenswrapper[4697]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 00:09:22 crc kubenswrapper[4697]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 00:09:22 crc kubenswrapper[4697]: livez check failed Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.370588 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" podUID="d5bd2b61-9ef8-44d5-ab36-a0f4ab462c5b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.392418 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bwtmw" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.435174 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db2c9\" (UniqueName: \"kubernetes.io/projected/292243f2-7308-454f-8d48-a9b408fb2bd5-kube-api-access-db2c9\") pod \"redhat-marketplace-xbn5x\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.435221 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-catalog-content\") pod \"redhat-marketplace-xbn5x\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.435291 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-utilities\") pod \"redhat-marketplace-xbn5x\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.439804 4697 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.440631 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.443691 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.443951 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.448143 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.570541 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.570643 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db2c9\" (UniqueName: \"kubernetes.io/projected/292243f2-7308-454f-8d48-a9b408fb2bd5-kube-api-access-db2c9\") pod \"redhat-marketplace-xbn5x\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.570661 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-catalog-content\") pod \"redhat-marketplace-xbn5x\" (UID: 
\"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.570688 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.570728 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-utilities\") pod \"redhat-marketplace-xbn5x\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.571204 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-utilities\") pod \"redhat-marketplace-xbn5x\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.571682 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-catalog-content\") pod \"redhat-marketplace-xbn5x\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.673741 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") 
" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.673824 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.673906 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.682114 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.697233 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.725388 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.727868 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db2c9\" (UniqueName: \"kubernetes.io/projected/292243f2-7308-454f-8d48-a9b408fb2bd5-kube-api-access-db2c9\") pod \"redhat-marketplace-xbn5x\" (UID: 
\"292243f2-7308-454f-8d48-a9b408fb2bd5\") " pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.740371 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qsdk7" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.747253 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s7lfb"] Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.748305 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.760257 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.792951 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7lfb"] Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.887616 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-catalog-content\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.887975 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-utilities\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.888880 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxg5m\" (UniqueName: \"kubernetes.io/projected/c086f88d-6f74-44d5-9728-f59ebcec3dce-kube-api-access-rxg5m\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.937315 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:22 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:22 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:22 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.937367 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.958937 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.989986 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-utilities\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.990031 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxg5m\" (UniqueName: \"kubernetes.io/projected/c086f88d-6f74-44d5-9728-f59ebcec3dce-kube-api-access-rxg5m\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.990097 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-catalog-content\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.990527 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-utilities\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:22 crc kubenswrapper[4697]: I0126 00:09:22.990549 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-catalog-content\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " 
pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.010765 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxg5m\" (UniqueName: \"kubernetes.io/projected/c086f88d-6f74-44d5-9728-f59ebcec3dce-kube-api-access-rxg5m\") pod \"redhat-marketplace-s7lfb\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.029156 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2f51327-b54f-430d-8728-302b40279d68" containerID="e3e67df37a665f964f9b59f7f3f016ff8206b4702b3820a1625625dc2aa39877" exitCode=0 Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.029225 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerDied","Data":"e3e67df37a665f964f9b59f7f3f016ff8206b4702b3820a1625625dc2aa39877"} Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.030797 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" event={"ID":"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64","Type":"ContainerStarted","Data":"f2f14fa6175b289f5148ace418e34e560bdc1710b722e6e19563cd8c85c6cf06"} Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.030821 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" event={"ID":"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64","Type":"ContainerStarted","Data":"d0264abca61c04c56bfe8eccceeaf8a17d370fab76961ac34d45f5b801b9bf6f"} Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.031830 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.034261 4697 generic.go:334] "Generic (PLEG): 
container finished" podID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerID="1890c8635442c2ced8baec6223762103d6077b57226f1a14d9511aadea78057e" exitCode=0 Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.034419 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f66" event={"ID":"9cfec86d-c03e-4a9d-8571-a233cba73af1","Type":"ContainerDied","Data":"1890c8635442c2ced8baec6223762103d6077b57226f1a14d9511aadea78057e"} Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.039574 4697 generic.go:334] "Generic (PLEG): container finished" podID="d7db0326-548c-4c19-86c3-15af398d39cb" containerID="3c176882c25a0bbff9627639af94549d03e7d05bf0efc766947a5b0354193b2b" exitCode=0 Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.039659 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wklhb" event={"ID":"d7db0326-548c-4c19-86c3-15af398d39cb","Type":"ContainerDied","Data":"3c176882c25a0bbff9627639af94549d03e7d05bf0efc766947a5b0354193b2b"} Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.064678 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xctft" event={"ID":"dfdab702-3a9b-4646-ad6b-9bb9404e92ad","Type":"ContainerStarted","Data":"f24b5da44028d8042ff7cb111aa27fa969b4d7ca11b334a6d7c2d738d06980d5"} Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.064721 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xctft" event={"ID":"dfdab702-3a9b-4646-ad6b-9bb9404e92ad","Type":"ContainerStarted","Data":"c3e0191e0f8cdeb2a4d1c2985431c2d469579652ff22b4cb393a863c254cdb88"} Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.101022 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" podStartSLOduration=82.101003019 podStartE2EDuration="1m22.101003019s" podCreationTimestamp="2026-01-26 
00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:23.066600351 +0000 UTC m=+104.703377741" watchObservedRunningTime="2026-01-26 00:09:23.101003019 +0000 UTC m=+104.737780409" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.125393 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.143103 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xctft" podStartSLOduration=82.143050666 podStartE2EDuration="1m22.143050666s" podCreationTimestamp="2026-01-26 00:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:23.141125078 +0000 UTC m=+104.777902468" watchObservedRunningTime="2026-01-26 00:09:23.143050666 +0000 UTC m=+104.779828056" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.199673 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.279639 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbn5x"] Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.348724 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tp2xk"] Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.351341 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.354542 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.358663 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xk"] Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.473705 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7lfb"] Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.504528 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-catalog-content\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.504609 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-utilities\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.504686 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99ntm\" (UniqueName: \"kubernetes.io/projected/8355e146-dafa-45db-85a5-b1534eeb6b53-kube-api-access-99ntm\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.609295 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-catalog-content\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.610497 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-catalog-content\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.611007 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-utilities\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.611148 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99ntm\" (UniqueName: \"kubernetes.io/projected/8355e146-dafa-45db-85a5-b1534eeb6b53-kube-api-access-99ntm\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.611449 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-utilities\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.644253 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99ntm\" (UniqueName: 
\"kubernetes.io/projected/8355e146-dafa-45db-85a5-b1534eeb6b53-kube-api-access-99ntm\") pod \"redhat-operators-tp2xk\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.704239 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.743257 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82kcj"] Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.744461 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.754118 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82kcj"] Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.812705 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-utilities\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.812757 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-catalog-content\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.812789 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rqr\" (UniqueName: 
\"kubernetes.io/projected/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-kube-api-access-l8rqr\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.915048 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-utilities\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.915392 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-catalog-content\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.915439 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rqr\" (UniqueName: \"kubernetes.io/projected/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-kube-api-access-l8rqr\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.915930 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-utilities\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.916044 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-catalog-content\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.931329 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:23 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:23 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:23 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.931382 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:23 crc kubenswrapper[4697]: I0126 00:09:23.935796 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rqr\" (UniqueName: \"kubernetes.io/projected/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-kube-api-access-l8rqr\") pod \"redhat-operators-82kcj\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.041276 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xk"] Jan 26 00:09:24 crc kubenswrapper[4697]: W0126 00:09:24.049789 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8355e146_dafa_45db_85a5_b1534eeb6b53.slice/crio-bf1ee74837119e2e51a5116e53fcd5cf8b6228cf2e213a2fba387119b9b55fa5 WatchSource:0}: Error finding container 
bf1ee74837119e2e51a5116e53fcd5cf8b6228cf2e213a2fba387119b9b55fa5: Status 404 returned error can't find the container with id bf1ee74837119e2e51a5116e53fcd5cf8b6228cf2e213a2fba387119b9b55fa5 Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.071303 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerStarted","Data":"ec20413bb31325e3bc59205f089fbf255d865fef6be5b7a4450e02b1238da462"} Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.071351 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerStarted","Data":"908b11b8ed1289d616a3403ad342113b42e1396d338bc4087afd6b2ebe54842c"} Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.072688 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597","Type":"ContainerStarted","Data":"18c0916f36cd8d56a131cd12a90b65b4da785a30c63e1c692a0bdc4a04a5372f"} Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.074618 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7lfb" event={"ID":"c086f88d-6f74-44d5-9728-f59ebcec3dce","Type":"ContainerStarted","Data":"eac5549d0d7d01a7210cd121714d25d8bceb6f9e2637ee9efef41fb371a3e3fc"} Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.076297 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xk" event={"ID":"8355e146-dafa-45db-85a5-b1534eeb6b53","Type":"ContainerStarted","Data":"bf1ee74837119e2e51a5116e53fcd5cf8b6228cf2e213a2fba387119b9b55fa5"} Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.083232 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.565310 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82kcj"] Jan 26 00:09:24 crc kubenswrapper[4697]: W0126 00:09:24.596526 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcfffbc4_4576_4314_b1a2_b990bd8dfa28.slice/crio-38370a5b99825fc943405515409b981c1018e1305990d0c91a2f57786f586d5f WatchSource:0}: Error finding container 38370a5b99825fc943405515409b981c1018e1305990d0c91a2f57786f586d5f: Status 404 returned error can't find the container with id 38370a5b99825fc943405515409b981c1018e1305990d0c91a2f57786f586d5f Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.936015 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:24 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:24 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:24 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:24 crc kubenswrapper[4697]: I0126 00:09:24.936081 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.094773 4697 generic.go:334] "Generic (PLEG): container finished" podID="423e0d18-f30f-42d7-987a-90bbb521a550" containerID="6362f79f62b7633b5aff7f5edde83a10c49a1deb0dca2772f6b902190da04654" exitCode=0 Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.094825 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" event={"ID":"423e0d18-f30f-42d7-987a-90bbb521a550","Type":"ContainerDied","Data":"6362f79f62b7633b5aff7f5edde83a10c49a1deb0dca2772f6b902190da04654"} Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.096939 4697 generic.go:334] "Generic (PLEG): container finished" podID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerID="ec20413bb31325e3bc59205f089fbf255d865fef6be5b7a4450e02b1238da462" exitCode=0 Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.096977 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerDied","Data":"ec20413bb31325e3bc59205f089fbf255d865fef6be5b7a4450e02b1238da462"} Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.099336 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597","Type":"ContainerStarted","Data":"143d7183f77813d09bba3765fa28da2f87f7cdbaab1689551f044710bea20f1c"} Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.100777 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82kcj" event={"ID":"dcfffbc4-4576-4314-b1a2-b990bd8dfa28","Type":"ContainerStarted","Data":"38370a5b99825fc943405515409b981c1018e1305990d0c91a2f57786f586d5f"} Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.103928 4697 generic.go:334] "Generic (PLEG): container finished" podID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerID="a125d1d41e5e93a096e32cd085dcf1cdc575830bcf33edc5a5532b8b3b71a807" exitCode=0 Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.103971 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7lfb" 
event={"ID":"c086f88d-6f74-44d5-9728-f59ebcec3dce","Type":"ContainerDied","Data":"a125d1d41e5e93a096e32cd085dcf1cdc575830bcf33edc5a5532b8b3b71a807"} Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.120282 4697 generic.go:334] "Generic (PLEG): container finished" podID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerID="c5987dbb66286df85405c5886dd118adb83ef37328ef9d80fd891e6ab6aa67e1" exitCode=0 Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.121251 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xk" event={"ID":"8355e146-dafa-45db-85a5-b1534eeb6b53","Type":"ContainerDied","Data":"c5987dbb66286df85405c5886dd118adb83ef37328ef9d80fd891e6ab6aa67e1"} Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.164629 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.164610543 podStartE2EDuration="3.164610543s" podCreationTimestamp="2026-01-26 00:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:09:25.150679197 +0000 UTC m=+106.787456577" watchObservedRunningTime="2026-01-26 00:09:25.164610543 +0000 UTC m=+106.801387943" Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.978431 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:25 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:25 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:25 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:25 crc kubenswrapper[4697]: I0126 00:09:25.978692 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" 
podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:26 crc kubenswrapper[4697]: I0126 00:09:26.128715 4697 generic.go:334] "Generic (PLEG): container finished" podID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerID="cee2ec6ca2169fb278c5ca4c0da6875fef00931acf272aeea9138cf10fa48740" exitCode=0 Jan 26 00:09:26 crc kubenswrapper[4697]: I0126 00:09:26.128828 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82kcj" event={"ID":"dcfffbc4-4576-4314-b1a2-b990bd8dfa28","Type":"ContainerDied","Data":"cee2ec6ca2169fb278c5ca4c0da6875fef00931acf272aeea9138cf10fa48740"} Jan 26 00:09:26 crc kubenswrapper[4697]: I0126 00:09:26.510322 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:26 crc kubenswrapper[4697]: I0126 00:09:26.515002 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-dsh82" Jan 26 00:09:26 crc kubenswrapper[4697]: I0126 00:09:26.932707 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:26 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:26 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:26 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:26 crc kubenswrapper[4697]: I0126 00:09:26.932771 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.130054 4697 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.167322 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 00:09:27 crc kubenswrapper[4697]: E0126 00:09:27.167795 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="423e0d18-f30f-42d7-987a-90bbb521a550" containerName="collect-profiles" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.167812 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="423e0d18-f30f-42d7-987a-90bbb521a550" containerName="collect-profiles" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.167944 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="423e0d18-f30f-42d7-987a-90bbb521a550" containerName="collect-profiles" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.168316 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.170284 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.170422 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.182834 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.199079 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/423e0d18-f30f-42d7-987a-90bbb521a550-config-volume\") pod \"423e0d18-f30f-42d7-987a-90bbb521a550\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.199153 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/423e0d18-f30f-42d7-987a-90bbb521a550-secret-volume\") pod \"423e0d18-f30f-42d7-987a-90bbb521a550\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.199237 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzlk7\" (UniqueName: \"kubernetes.io/projected/423e0d18-f30f-42d7-987a-90bbb521a550-kube-api-access-xzlk7\") pod \"423e0d18-f30f-42d7-987a-90bbb521a550\" (UID: \"423e0d18-f30f-42d7-987a-90bbb521a550\") " Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.199508 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/958f105a-614b-436c-ae45-9b7844a8ec5d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: 
\"958f105a-614b-436c-ae45-9b7844a8ec5d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.199546 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/958f105a-614b-436c-ae45-9b7844a8ec5d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"958f105a-614b-436c-ae45-9b7844a8ec5d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.200166 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/423e0d18-f30f-42d7-987a-90bbb521a550-config-volume" (OuterVolumeSpecName: "config-volume") pod "423e0d18-f30f-42d7-987a-90bbb521a550" (UID: "423e0d18-f30f-42d7-987a-90bbb521a550"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.250135 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/423e0d18-f30f-42d7-987a-90bbb521a550-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "423e0d18-f30f-42d7-987a-90bbb521a550" (UID: "423e0d18-f30f-42d7-987a-90bbb521a550"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.253053 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" event={"ID":"423e0d18-f30f-42d7-987a-90bbb521a550","Type":"ContainerDied","Data":"68f95e1328ff488535cae4fb17a41acd645feb692a9d5af94cae3d8a1d7a36c9"} Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.253140 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f95e1328ff488535cae4fb17a41acd645feb692a9d5af94cae3d8a1d7a36c9" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.253196 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489760-2ms5w" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.260564 4697 generic.go:334] "Generic (PLEG): container finished" podID="da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597" containerID="143d7183f77813d09bba3765fa28da2f87f7cdbaab1689551f044710bea20f1c" exitCode=0 Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.261307 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597","Type":"ContainerDied","Data":"143d7183f77813d09bba3765fa28da2f87f7cdbaab1689551f044710bea20f1c"} Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.319656 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/958f105a-614b-436c-ae45-9b7844a8ec5d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"958f105a-614b-436c-ae45-9b7844a8ec5d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.319697 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/958f105a-614b-436c-ae45-9b7844a8ec5d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"958f105a-614b-436c-ae45-9b7844a8ec5d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.319871 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/423e0d18-f30f-42d7-987a-90bbb521a550-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.320103 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/958f105a-614b-436c-ae45-9b7844a8ec5d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"958f105a-614b-436c-ae45-9b7844a8ec5d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.320130 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/423e0d18-f30f-42d7-987a-90bbb521a550-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.442623 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/423e0d18-f30f-42d7-987a-90bbb521a550-kube-api-access-xzlk7" (OuterVolumeSpecName: "kube-api-access-xzlk7") pod "423e0d18-f30f-42d7-987a-90bbb521a550" (UID: "423e0d18-f30f-42d7-987a-90bbb521a550"). InnerVolumeSpecName "kube-api-access-xzlk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.491769 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/958f105a-614b-436c-ae45-9b7844a8ec5d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"958f105a-614b-436c-ae45-9b7844a8ec5d\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.523615 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzlk7\" (UniqueName: \"kubernetes.io/projected/423e0d18-f30f-42d7-987a-90bbb521a550-kube-api-access-xzlk7\") on node \"crc\" DevicePath \"\"" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.620917 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.731461 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-ng9sq" Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.932238 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:27 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:27 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:27 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:27 crc kubenswrapper[4697]: I0126 00:09:27.932328 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:28 crc kubenswrapper[4697]: I0126 00:09:28.928829 4697 
patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:28 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:28 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:28 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:28 crc kubenswrapper[4697]: I0126 00:09:28.930814 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:29 crc kubenswrapper[4697]: I0126 00:09:29.933385 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:29 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:29 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:29 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:29 crc kubenswrapper[4697]: I0126 00:09:29.933460 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:30 crc kubenswrapper[4697]: I0126 00:09:30.928298 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:30 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:30 
crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:30 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:30 crc kubenswrapper[4697]: I0126 00:09:30.928588 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.597679 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.597725 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.597753 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.597825 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.866796 4697 patch_prober.go:28] interesting pod/console-f9d7485db-jwzr6 container/console namespace/openshift-console: 
Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.866855 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jwzr6" podUID="50f20a9a-ecce-45dd-9377-916c0a0ea723" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.931888 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:31 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:31 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:31 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:31 crc kubenswrapper[4697]: I0126 00:09:31.931951 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:32 crc kubenswrapper[4697]: I0126 00:09:32.944296 4697 patch_prober.go:28] interesting pod/router-default-5444994796-cqnrq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 00:09:32 crc kubenswrapper[4697]: [-]has-synced failed: reason withheld Jan 26 00:09:32 crc kubenswrapper[4697]: [+]process-running ok Jan 26 00:09:32 crc kubenswrapper[4697]: healthz check failed Jan 26 00:09:32 crc kubenswrapper[4697]: I0126 00:09:32.944680 4697 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-cqnrq" podUID="d4687e4b-813c-425f-ac21-cc39b28872dd" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 00:09:33 crc kubenswrapper[4697]: I0126 00:09:33.931351 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:33 crc kubenswrapper[4697]: I0126 00:09:33.936685 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-cqnrq" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.214492 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.668198 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.669953 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.675832 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.675879 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f2kkh" 
podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.675917 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-f2kkh" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.676357 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"e53681783f46fa456f555a133885373766449c96e45bcd4aca4b399b47f8a589"} pod="openshift-console/downloads-7954f5f757-f2kkh" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.676424 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" containerID="cri-o://e53681783f46fa456f555a133885373766449c96e45bcd4aca4b399b47f8a589" gracePeriod=2 Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.676807 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.676825 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.870764 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-f9d7485db-jwzr6" Jan 26 00:09:41 crc kubenswrapper[4697]: I0126 00:09:41.876063 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jwzr6" Jan 26 00:09:42 crc kubenswrapper[4697]: I0126 00:09:42.847586 4697 generic.go:334] "Generic (PLEG): container finished" podID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerID="e53681783f46fa456f555a133885373766449c96e45bcd4aca4b399b47f8a589" exitCode=0 Jan 26 00:09:42 crc kubenswrapper[4697]: I0126 00:09:42.847656 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f2kkh" event={"ID":"34ce2092-249b-4b00-8e7a-46fa672982f5","Type":"ContainerDied","Data":"e53681783f46fa456f555a133885373766449c96e45bcd4aca4b399b47f8a589"} Jan 26 00:09:51 crc kubenswrapper[4697]: I0126 00:09:51.598286 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:09:51 crc kubenswrapper[4697]: I0126 00:09:51.598908 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:09:52 crc kubenswrapper[4697]: I0126 00:09:52.730600 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-f62mg" Jan 26 00:09:55 crc kubenswrapper[4697]: I0126 00:09:55.941617 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597","Type":"ContainerDied","Data":"18c0916f36cd8d56a131cd12a90b65b4da785a30c63e1c692a0bdc4a04a5372f"} Jan 26 00:09:55 crc kubenswrapper[4697]: I0126 00:09:55.942205 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18c0916f36cd8d56a131cd12a90b65b4da785a30c63e1c692a0bdc4a04a5372f" Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.036065 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.229752 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kubelet-dir\") pod \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") " Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.229850 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kube-api-access\") pod \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\" (UID: \"da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597\") " Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.229851 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597" (UID: "da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.232060 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.237161 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597" (UID: "da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.332883 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:09:56 crc kubenswrapper[4697]: I0126 00:09:56.947252 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 00:09:57 crc kubenswrapper[4697]: I0126 00:09:57.955264 4697 generic.go:334] "Generic (PLEG): container finished" podID="7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa" containerID="94ec5d01946e2f05833faeb3dd3c73ca6048ce650bbc0d2a8cc187f1ea7f09bd" exitCode=0 Jan 26 00:09:57 crc kubenswrapper[4697]: I0126 00:09:57.955347 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-4chhm" event={"ID":"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa","Type":"ContainerDied","Data":"94ec5d01946e2f05833faeb3dd3c73ca6048ce650bbc0d2a8cc187f1ea7f09bd"} Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.367520 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 00:10:00 crc kubenswrapper[4697]: E0126 00:10:00.375701 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597" containerName="pruner" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.375736 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597" containerName="pruner" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.375861 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="da3c9a94-d0f9-4204-bb3b-0ab7a9b7a597" containerName="pruner" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.376384 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.378391 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.436940 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.437032 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.539607 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.540425 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.539789 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.577348 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:00 crc kubenswrapper[4697]: I0126 00:10:00.707530 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 00:10:01 crc kubenswrapper[4697]: I0126 00:10:01.597329 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:10:01 crc kubenswrapper[4697]: I0126 00:10:01.597475 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.700373 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.700715 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.700755 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.700781 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.702344 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.703030 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.703464 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.712864 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.717842 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.725125 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.725470 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.774336 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.979389 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.986781 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 00:10:04 crc kubenswrapper[4697]: I0126 00:10:04.994390 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.328810 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.328870 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.358973 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.360701 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.374287 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.427174 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.427285 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eeb0379c-8f6d-4151-8552-d891cf28c05b-kube-api-access\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.427344 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-var-lock\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.529374 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-var-lock\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.529532 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-var-lock\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.529561 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.529641 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eeb0379c-8f6d-4151-8552-d891cf28c05b-kube-api-access\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.529734 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.671470 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eeb0379c-8f6d-4151-8552-d891cf28c05b-kube-api-access\") pod \"installer-9-crc\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:06 crc kubenswrapper[4697]: I0126 00:10:06.965733 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:10:09 crc kubenswrapper[4697]: E0126 00:10:09.429339 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 00:10:09 crc kubenswrapper[4697]: E0126 00:10:09.429664 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxg5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s7lfb_openshift-marketplace(c086f88d-6f74-44d5-9728-f59ebcec3dce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:10:09 crc kubenswrapper[4697]: E0126 00:10:09.431411 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-s7lfb" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" Jan 26 00:10:10 crc kubenswrapper[4697]: E0126 00:10:10.687774 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s7lfb" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" Jan 26 00:10:10 crc kubenswrapper[4697]: E0126 00:10:10.751801 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 00:10:10 crc kubenswrapper[4697]: E0126 00:10:10.752398 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hvfhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-w4m7r_openshift-marketplace(e2f51327-b54f-430d-8728-302b40279d68): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:10:10 crc kubenswrapper[4697]: E0126 00:10:10.753789 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-w4m7r" podUID="e2f51327-b54f-430d-8728-302b40279d68" Jan 26 00:10:11 crc 
kubenswrapper[4697]: I0126 00:10:11.597883 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 26 00:10:11 crc kubenswrapper[4697]: I0126 00:10:11.598140 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 26 00:10:16 crc kubenswrapper[4697]: E0126 00:10:16.997770 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-w4m7r" podUID="e2f51327-b54f-430d-8728-302b40279d68" Jan 26 00:10:17 crc kubenswrapper[4697]: E0126 00:10:17.039260 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 00:10:17 crc kubenswrapper[4697]: E0126 00:10:17.039463 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8rqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-82kcj_openshift-marketplace(dcfffbc4-4576-4314-b1a2-b990bd8dfa28): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:10:17 crc kubenswrapper[4697]: E0126 00:10:17.040913 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-82kcj" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" Jan 26 00:10:17 crc 
kubenswrapper[4697]: E0126 00:10:17.086006 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 26 00:10:17 crc kubenswrapper[4697]: E0126 00:10:17.086251 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99ntm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tp2xk_openshift-marketplace(8355e146-dafa-45db-85a5-b1534eeb6b53): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 00:10:17 crc kubenswrapper[4697]: E0126 00:10:17.087479 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tp2xk" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.385351 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-82kcj" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.385928 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tp2xk" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.505091 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.505437 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-db2c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xbn5x_openshift-marketplace(292243f2-7308-454f-8d48-a9b408fb2bd5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.505890 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.506025 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nlxpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kxm8l_openshift-marketplace(52a87d4f-2b9b-44c1-9457-cedcf68d8819): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.506683 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xbn5x" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.508502 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kxm8l" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819"
Jan 26 00:10:18 crc kubenswrapper[4697]: I0126 00:10:18.541602 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-4chhm"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.563812 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.564061 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pgrgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-p9f66_openshift-marketplace(9cfec86d-c03e-4a9d-8571-a233cba73af1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.565819 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-p9f66" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.590464 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.590645 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcmxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wklhb_openshift-marketplace(d7db0326-548c-4c19-86c3-15af398d39cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 26 00:10:18 crc kubenswrapper[4697]: E0126 00:10:18.592083 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wklhb" podUID="d7db0326-548c-4c19-86c3-15af398d39cb"
Jan 26 00:10:18 crc kubenswrapper[4697]: I0126 00:10:18.649503 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjq9t\" (UniqueName: \"kubernetes.io/projected/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-kube-api-access-wjq9t\") pod \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") "
Jan 26 00:10:18 crc kubenswrapper[4697]: I0126 00:10:18.650181 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-serviceca\") pod \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\" (UID: \"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa\") "
Jan 26 00:10:18 crc kubenswrapper[4697]: I0126 00:10:18.651575 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-serviceca" (OuterVolumeSpecName: "serviceca") pod "7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa" (UID: "7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 00:10:18 crc kubenswrapper[4697]: I0126 00:10:18.672247 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-kube-api-access-wjq9t" (OuterVolumeSpecName: "kube-api-access-wjq9t") pod "7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa" (UID: "7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa"). InnerVolumeSpecName "kube-api-access-wjq9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 00:10:18 crc kubenswrapper[4697]: I0126 00:10:18.751830 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjq9t\" (UniqueName: \"kubernetes.io/projected/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-kube-api-access-wjq9t\") on node \"crc\" DevicePath \"\""
Jan 26 00:10:18 crc kubenswrapper[4697]: I0126 00:10:18.751871 4697 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa-serviceca\") on node \"crc\" DevicePath \"\""
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.061767 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.106644 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 26 00:10:19 crc kubenswrapper[4697]: W0126 00:10:19.127195 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod71a35080_724c_4ed9_ad2b_19fd6e14b3bf.slice/crio-d1a612a7fdf71add51c4dce4acab88795a58d8283befa8ede02375b24f8f5f28 WatchSource:0}: Error finding container d1a612a7fdf71add51c4dce4acab88795a58d8283befa8ede02375b24f8f5f28: Status 404 returned error can't find the container with id d1a612a7fdf71add51c4dce4acab88795a58d8283befa8ede02375b24f8f5f28
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.139419 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f2kkh" event={"ID":"34ce2092-249b-4b00-8e7a-46fa672982f5","Type":"ContainerStarted","Data":"2e77da2cdedb94d2ec468ede8a9e0415919b9aea969d723f1b3f270cdd40ebd5"}
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.139840 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-f2kkh"
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.140296 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.140336 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.141421 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71a35080-724c-4ed9-ad2b-19fd6e14b3bf","Type":"ContainerStarted","Data":"d1a612a7fdf71add51c4dce4acab88795a58d8283befa8ede02375b24f8f5f28"}
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.145999 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"958f105a-614b-436c-ae45-9b7844a8ec5d","Type":"ContainerStarted","Data":"ee93b9eff85397c31b94590b147726a5ad1c9f86c43b54187dcad5e18e9b58d0"}
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.149256 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.156113 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c07227267181d625d8d19fe6d0a66517aceb2d961e53b909ec5be110610696c7"}
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.173136 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29489760-4chhm"
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.173498 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29489760-4chhm" event={"ID":"7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa","Type":"ContainerDied","Data":"963548a00499ea9daf4cea57ae207876662fced667f4c8a7601cbb8956fe7303"}
Jan 26 00:10:19 crc kubenswrapper[4697]: I0126 00:10:19.173539 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963548a00499ea9daf4cea57ae207876662fced667f4c8a7601cbb8956fe7303"
Jan 26 00:10:19 crc kubenswrapper[4697]: E0126 00:10:19.209056 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xbn5x" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5"
Jan 26 00:10:19 crc kubenswrapper[4697]: E0126 00:10:19.209536 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wklhb" podUID="d7db0326-548c-4c19-86c3-15af398d39cb"
Jan 26 00:10:19 crc kubenswrapper[4697]: E0126 00:10:19.209657 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kxm8l" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819"
Jan 26 00:10:19 crc kubenswrapper[4697]: E0126 00:10:19.209749 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-p9f66" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1"
Jan 26 00:10:19 crc kubenswrapper[4697]: W0126 00:10:19.210661 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeeb0379c_8f6d_4151_8552_d891cf28c05b.slice/crio-47fcc26a0a9bbe5315ece35757fc4d23760ea2771db61f9fed4d11895dd68471 WatchSource:0}: Error finding container 47fcc26a0a9bbe5315ece35757fc4d23760ea2771db61f9fed4d11895dd68471: Status 404 returned error can't find the container with id 47fcc26a0a9bbe5315ece35757fc4d23760ea2771db61f9fed4d11895dd68471
Jan 26 00:10:19 crc kubenswrapper[4697]: W0126 00:10:19.311764 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f6509263603e71559d5f9d4c8e8bff2b1fa1921c62f57046641a7a4da784b916 WatchSource:0}: Error finding container f6509263603e71559d5f9d4c8e8bff2b1fa1921c62f57046641a7a4da784b916: Status 404 returned error can't find the container with id f6509263603e71559d5f9d4c8e8bff2b1fa1921c62f57046641a7a4da784b916
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.181624 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7e1b7ad0397d238be252c62e6fda9a721003ab99eb971dde5f431b37659c1ad1"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.182253 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f6509263603e71559d5f9d4c8e8bff2b1fa1921c62f57046641a7a4da784b916"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.186875 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71a35080-724c-4ed9-ad2b-19fd6e14b3bf","Type":"ContainerStarted","Data":"8b72fac2f813cbe05e9b77c71a0b295540a2daff11f93df3c5c1ffed08e91f00"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.188798 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"958f105a-614b-436c-ae45-9b7844a8ec5d","Type":"ContainerStarted","Data":"195f99f3dc009546add8cf9aef105634d3914f00a3fe711a5652c252ad92967c"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.190788 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4a23e9612e1ced432d07e65aab4f361625f93e3412a7da00084925b104a4e66a"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.192158 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"155e416c0f0a04b7e9782e8434353cad4fb1b1fa84061fb75e98e796c74969ce"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.192230 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"373d03c34ffc19d248a0ce81cfb0796a7e9867a99652fa28687cb8dff1045268"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.192401 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.193818 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eeb0379c-8f6d-4151-8552-d891cf28c05b","Type":"ContainerStarted","Data":"788854723e67614f3debe570b72222d9a92c3bd1487f6979e7722312551238d0"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.193849 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eeb0379c-8f6d-4151-8552-d891cf28c05b","Type":"ContainerStarted","Data":"47fcc26a0a9bbe5315ece35757fc4d23760ea2771db61f9fed4d11895dd68471"}
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.194692 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.194769 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.252440 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=20.252419538 podStartE2EDuration="20.252419538s" podCreationTimestamp="2026-01-26 00:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:20.249480315 +0000 UTC m=+161.886257705" watchObservedRunningTime="2026-01-26 00:10:20.252419538 +0000 UTC m=+161.889196928"
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.282008 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=14.281982276 podStartE2EDuration="14.281982276s" podCreationTimestamp="2026-01-26 00:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:20.275524883 +0000 UTC m=+161.912302283" watchObservedRunningTime="2026-01-26 00:10:20.281982276 +0000 UTC m=+161.918759676"
Jan 26 00:10:20 crc kubenswrapper[4697]: I0126 00:10:20.323342 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=53.323318128 podStartE2EDuration="53.323318128s" podCreationTimestamp="2026-01-26 00:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:10:20.31951979 +0000 UTC m=+161.956297180" watchObservedRunningTime="2026-01-26 00:10:20.323318128 +0000 UTC m=+161.960095518"
Jan 26 00:10:20 crc kubenswrapper[4697]: E0126 00:10:20.432028 4697 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod958f105a_614b_436c_ae45_9b7844a8ec5d.slice/crio-195f99f3dc009546add8cf9aef105634d3914f00a3fe711a5652c252ad92967c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod71a35080_724c_4ed9_ad2b_19fd6e14b3bf.slice/crio-8b72fac2f813cbe05e9b77c71a0b295540a2daff11f93df3c5c1ffed08e91f00.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.200784 4697 generic.go:334] "Generic (PLEG): container finished" podID="71a35080-724c-4ed9-ad2b-19fd6e14b3bf" containerID="8b72fac2f813cbe05e9b77c71a0b295540a2daff11f93df3c5c1ffed08e91f00" exitCode=0
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.200914 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71a35080-724c-4ed9-ad2b-19fd6e14b3bf","Type":"ContainerDied","Data":"8b72fac2f813cbe05e9b77c71a0b295540a2daff11f93df3c5c1ffed08e91f00"}
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.203594 4697 generic.go:334] "Generic (PLEG): container finished" podID="958f105a-614b-436c-ae45-9b7844a8ec5d" containerID="195f99f3dc009546add8cf9aef105634d3914f00a3fe711a5652c252ad92967c" exitCode=0
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.203753 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"958f105a-614b-436c-ae45-9b7844a8ec5d","Type":"ContainerDied","Data":"195f99f3dc009546add8cf9aef105634d3914f00a3fe711a5652c252ad92967c"}
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.598083 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.598136 4697 patch_prober.go:28] interesting pod/downloads-7954f5f757-f2kkh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body=
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.598171 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 26 00:10:21 crc kubenswrapper[4697]: I0126 00:10:21.598188 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f2kkh" podUID="34ce2092-249b-4b00-8e7a-46fa672982f5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.567815 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.666027 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.706693 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kube-api-access\") pod \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") "
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.706754 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kubelet-dir\") pod \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\" (UID: \"71a35080-724c-4ed9-ad2b-19fd6e14b3bf\") "
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.706981 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "71a35080-724c-4ed9-ad2b-19fd6e14b3bf" (UID: "71a35080-724c-4ed9-ad2b-19fd6e14b3bf"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.707345 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.733317 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "71a35080-724c-4ed9-ad2b-19fd6e14b3bf" (UID: "71a35080-724c-4ed9-ad2b-19fd6e14b3bf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.808710 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/958f105a-614b-436c-ae45-9b7844a8ec5d-kube-api-access\") pod \"958f105a-614b-436c-ae45-9b7844a8ec5d\" (UID: \"958f105a-614b-436c-ae45-9b7844a8ec5d\") "
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.808843 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/958f105a-614b-436c-ae45-9b7844a8ec5d-kubelet-dir\") pod \"958f105a-614b-436c-ae45-9b7844a8ec5d\" (UID: \"958f105a-614b-436c-ae45-9b7844a8ec5d\") "
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.808993 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/958f105a-614b-436c-ae45-9b7844a8ec5d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "958f105a-614b-436c-ae45-9b7844a8ec5d" (UID: "958f105a-614b-436c-ae45-9b7844a8ec5d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.809426 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/958f105a-614b-436c-ae45-9b7844a8ec5d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.809460 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71a35080-724c-4ed9-ad2b-19fd6e14b3bf-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 00:10:22 crc kubenswrapper[4697]: I0126 00:10:22.955048 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958f105a-614b-436c-ae45-9b7844a8ec5d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "958f105a-614b-436c-ae45-9b7844a8ec5d" (UID: "958f105a-614b-436c-ae45-9b7844a8ec5d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 00:10:23 crc kubenswrapper[4697]: I0126 00:10:23.015242 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/958f105a-614b-436c-ae45-9b7844a8ec5d-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 26 00:10:23 crc kubenswrapper[4697]: I0126 00:10:23.216234 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 26 00:10:23 crc kubenswrapper[4697]: I0126 00:10:23.216678 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"71a35080-724c-4ed9-ad2b-19fd6e14b3bf","Type":"ContainerDied","Data":"d1a612a7fdf71add51c4dce4acab88795a58d8283befa8ede02375b24f8f5f28"}
Jan 26 00:10:23 crc kubenswrapper[4697]: I0126 00:10:23.216714 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1a612a7fdf71add51c4dce4acab88795a58d8283befa8ede02375b24f8f5f28"
Jan 26 00:10:23 crc kubenswrapper[4697]: I0126 00:10:23.218187 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"958f105a-614b-436c-ae45-9b7844a8ec5d","Type":"ContainerDied","Data":"ee93b9eff85397c31b94590b147726a5ad1c9f86c43b54187dcad5e18e9b58d0"}
Jan 26 00:10:23 crc kubenswrapper[4697]: I0126 00:10:23.218211 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee93b9eff85397c31b94590b147726a5ad1c9f86c43b54187dcad5e18e9b58d0"
Jan 26 00:10:23 crc kubenswrapper[4697]: I0126 00:10:23.218241 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 26 00:10:31 crc kubenswrapper[4697]: I0126 00:10:31.228166 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9dqv8"]
Jan 26 00:10:31 crc kubenswrapper[4697]: I0126 00:10:31.614329 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-f2kkh"
Jan 26 00:10:36 crc kubenswrapper[4697]: I0126 00:10:36.328486 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:10:36 crc kubenswrapper[4697]: I0126 00:10:36.329001 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:10:55 crc kubenswrapper[4697]: I0126 00:10:55.033516 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.254535 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" podUID="c66eb9c1-ad69-4acc-8d3b-82050eee2656" containerName="oauth-openshift" containerID="cri-o://8958957ef01cd545876a77343a5a6193876d5be3180b9821a963d9f70e169870" gracePeriod=15
Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.424319 4697 generic.go:334] "Generic (PLEG): container finished" podID="c66eb9c1-ad69-4acc-8d3b-82050eee2656"
containerID="8958957ef01cd545876a77343a5a6193876d5be3180b9821a963d9f70e169870" exitCode=0 Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.424370 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" event={"ID":"c66eb9c1-ad69-4acc-8d3b-82050eee2656","Type":"ContainerDied","Data":"8958957ef01cd545876a77343a5a6193876d5be3180b9821a963d9f70e169870"} Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.610357 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.812462 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-serving-cert\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.812858 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-dir\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.812888 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-trusted-ca-bundle\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.812924 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.812950 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-provider-selection\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.812991 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-login\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813047 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-error\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813088 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-router-certs\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813115 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-service-ca\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813147 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-cliconfig\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813179 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-policies\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813208 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-ocp-branding-template\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813233 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-idp-0-file-data\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813256 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-session\") 
pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813280 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzxwb\" (UniqueName: \"kubernetes.io/projected/c66eb9c1-ad69-4acc-8d3b-82050eee2656-kube-api-access-mzxwb\") pod \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\" (UID: \"c66eb9c1-ad69-4acc-8d3b-82050eee2656\") " Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813548 4697 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.813826 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.814315 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.814824 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.815289 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.882733 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-586d5b9769-lbpdf"] Jan 26 00:10:56 crc kubenswrapper[4697]: E0126 00:10:56.883042 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66eb9c1-ad69-4acc-8d3b-82050eee2656" containerName="oauth-openshift" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883058 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66eb9c1-ad69-4acc-8d3b-82050eee2656" containerName="oauth-openshift" Jan 26 00:10:56 crc kubenswrapper[4697]: E0126 00:10:56.883088 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa" containerName="image-pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883098 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa" containerName="image-pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: E0126 
00:10:56.883112 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a35080-724c-4ed9-ad2b-19fd6e14b3bf" containerName="pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883119 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a35080-724c-4ed9-ad2b-19fd6e14b3bf" containerName="pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: E0126 00:10:56.883127 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958f105a-614b-436c-ae45-9b7844a8ec5d" containerName="pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883135 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="958f105a-614b-436c-ae45-9b7844a8ec5d" containerName="pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883296 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66eb9c1-ad69-4acc-8d3b-82050eee2656" containerName="oauth-openshift" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883313 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a35080-724c-4ed9-ad2b-19fd6e14b3bf" containerName="pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883330 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d81d11c-a79e-4ec1-9fdb-7be185e4b2fa" containerName="image-pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883341 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="958f105a-614b-436c-ae45-9b7844a8ec5d" containerName="pruner" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883753 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-586d5b9769-lbpdf"] Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.883851 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.937171 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.939648 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.939787 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:56 crc kubenswrapper[4697]: I0126 00:10:56.939857 4697 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c66eb9c1-ad69-4acc-8d3b-82050eee2656-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.007313 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.012547 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66eb9c1-ad69-4acc-8d3b-82050eee2656-kube-api-access-mzxwb" (OuterVolumeSpecName: "kube-api-access-mzxwb") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "kube-api-access-mzxwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.012715 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055135 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-cliconfig\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055197 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0ac8fa9-d85e-44d7-83f1-2118290c4013-audit-dir\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055225 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055259 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055318 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-service-ca\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055392 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-audit-policies\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055414 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-session\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055455 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055479 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-error\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055506 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzbgv\" (UniqueName: \"kubernetes.io/projected/b0ac8fa9-d85e-44d7-83f1-2118290c4013-kube-api-access-pzbgv\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055529 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-serving-cert\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: 
\"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055551 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-login\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055576 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055621 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-router-certs\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055680 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055696 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzxwb\" (UniqueName: 
\"kubernetes.io/projected/c66eb9c1-ad69-4acc-8d3b-82050eee2656-kube-api-access-mzxwb\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.055709 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.118113 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.118263 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.118517 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.118742 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.121690 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.122218 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c66eb9c1-ad69-4acc-8d3b-82050eee2656" (UID: "c66eb9c1-ad69-4acc-8d3b-82050eee2656"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.164785 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.164841 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-router-certs\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.164866 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-cliconfig\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.164887 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0ac8fa9-d85e-44d7-83f1-2118290c4013-audit-dir\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.164908 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.164935 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.164985 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-service-ca\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165048 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-audit-policies\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165065 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-session\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " 
pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165097 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165118 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-error\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165158 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzbgv\" (UniqueName: \"kubernetes.io/projected/b0ac8fa9-d85e-44d7-83f1-2118290c4013-kube-api-access-pzbgv\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165183 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-serving-cert\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165204 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-login\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165246 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165256 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165268 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165312 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165381 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.165621 4697 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/c66eb9c1-ad69-4acc-8d3b-82050eee2656-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.166371 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-service-ca\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.169021 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-login\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.169523 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-audit-policies\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.171975 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-session\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.175644 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.204283 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b0ac8fa9-d85e-44d7-83f1-2118290c4013-audit-dir\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.205427 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.206430 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.206902 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-cliconfig\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " 
pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.221225 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-error\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.221619 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.221703 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-router-certs\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.224000 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzbgv\" (UniqueName: \"kubernetes.io/projected/b0ac8fa9-d85e-44d7-83f1-2118290c4013-kube-api-access-pzbgv\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.248610 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b0ac8fa9-d85e-44d7-83f1-2118290c4013-v4-0-config-system-serving-cert\") pod \"oauth-openshift-586d5b9769-lbpdf\" (UID: \"b0ac8fa9-d85e-44d7-83f1-2118290c4013\") " pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.309658 4697 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.310259 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312657 4697 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312679 4697 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:10:57 crc kubenswrapper[4697]: E0126 00:10:57.312780 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312790 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 00:10:57 crc kubenswrapper[4697]: E0126 00:10:57.312797 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312802 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 00:10:57 crc kubenswrapper[4697]: E0126 00:10:57.312815 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 00:10:57 crc 
kubenswrapper[4697]: I0126 00:10:57.312821 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 00:10:57 crc kubenswrapper[4697]: E0126 00:10:57.312827 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312833 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 00:10:57 crc kubenswrapper[4697]: E0126 00:10:57.312839 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312844 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:10:57 crc kubenswrapper[4697]: E0126 00:10:57.312852 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312858 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 00:10:57 crc kubenswrapper[4697]: E0126 00:10:57.312867 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312872 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312975 4697 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312989 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.312998 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.313005 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.313011 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.313018 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.314192 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241" gracePeriod=15 Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.314313 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213" gracePeriod=15 Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.314368 4697 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d" gracePeriod=15 Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.314398 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424" gracePeriod=15 Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.314427 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa" gracePeriod=15 Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.433986 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerStarted","Data":"99b70f9f24e6b59c5779af5c5c93b1726e29cca28480244c3a2ab4217949f679"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.437549 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82kcj" event={"ID":"dcfffbc4-4576-4314-b1a2-b990bd8dfa28","Type":"ContainerStarted","Data":"0c4434458cbeb2495b9a3bf1dd8ec7ed5e7177302f27efe8107ca8e584b02462"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.441503 4697 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 00:10:57 
crc kubenswrapper[4697]: I0126 00:10:57.480361 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.482772 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" event={"ID":"c66eb9c1-ad69-4acc-8d3b-82050eee2656","Type":"ContainerDied","Data":"20e5308dfd1d5f4d4234a6492e308bfb32b098ea32a1b07f6b465107edd0a8c7"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.482815 4697 scope.go:117] "RemoveContainer" containerID="8958957ef01cd545876a77343a5a6193876d5be3180b9821a963d9f70e169870" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.482941 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9dqv8" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490185 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490225 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490244 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490284 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490335 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490352 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490381 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.490399 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616529 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616594 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616634 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616702 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616708 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616651 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616780 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616808 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616850 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616967 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.616999 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.617087 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.617229 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.617238 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.617034 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.617290 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.684529 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7lfb" event={"ID":"c086f88d-6f74-44d5-9728-f59ebcec3dce","Type":"ContainerStarted","Data":"dc21a5d3ed33e83118ad75e242a3f4c7bdd0c2af86922cec5fe63536dab0774a"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.694963 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f66" event={"ID":"9cfec86d-c03e-4a9d-8571-a233cba73af1","Type":"ContainerStarted","Data":"7d7cb96a8e5917cc78bc0d76ddca19cd78c69b627c7a98cfd93f5e39f590fce1"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.701681 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxm8l" event={"ID":"52a87d4f-2b9b-44c1-9457-cedcf68d8819","Type":"ContainerStarted","Data":"2ae4ebb3471bded4d80b7cd81d8aacf1bb8f20359d9227cea5d27d17e3b28f0f"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.703928 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xk" event={"ID":"8355e146-dafa-45db-85a5-b1534eeb6b53","Type":"ContainerStarted","Data":"440b183ef7678c930e450e0a239aff77833099119a69716925bda97d7552ab39"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.705837 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wklhb" 
event={"ID":"d7db0326-548c-4c19-86c3-15af398d39cb","Type":"ContainerStarted","Data":"0b4060f259824a1321bca117f98ee0203ba3ce2d81318c3d1b8251e82b7f85b9"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.707431 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerStarted","Data":"9d5c261753be656f9e0748b4384942a591968edaf26925e897b7e4ae7207f3a4"} Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.735275 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.740195 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.795283 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9dqv8"] Jan 26 00:10:57 crc kubenswrapper[4697]: I0126 00:10:57.798772 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9dqv8"] Jan 26 00:10:58 crc kubenswrapper[4697]: E0126 00:10:58.391338 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f6d69d113a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC m=+200.027167057,LastTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC m=+200.027167057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:10:58 crc kubenswrapper[4697]: I0126 00:10:58.668694 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66eb9c1-ad69-4acc-8d3b-82050eee2656" path="/var/lib/kubelet/pods/c66eb9c1-ad69-4acc-8d3b-82050eee2656/volumes" Jan 26 00:10:58 crc kubenswrapper[4697]: E0126 00:10:58.682454 4697 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" volumeName="registry-storage" Jan 26 00:10:58 crc kubenswrapper[4697]: I0126 00:10:58.714433 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 00:10:58 crc kubenswrapper[4697]: I0126 00:10:58.721392 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:10:58 crc kubenswrapper[4697]: I0126 00:10:58.726104 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa" exitCode=2 Jan 26 00:10:58 crc kubenswrapper[4697]: I0126 00:10:58.733143 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8c9dcca3db22d799b78e25fcd65570c667d06d659bac8525ffecc95ca2f35b0e"} Jan 26 00:10:59 crc kubenswrapper[4697]: I0126 00:10:59.763481 4697 generic.go:334] "Generic (PLEG): container finished" podID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerID="dc21a5d3ed33e83118ad75e242a3f4c7bdd0c2af86922cec5fe63536dab0774a" exitCode=0 Jan 26 00:10:59 crc kubenswrapper[4697]: I0126 00:10:59.763558 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7lfb" event={"ID":"c086f88d-6f74-44d5-9728-f59ebcec3dce","Type":"ContainerDied","Data":"dc21a5d3ed33e83118ad75e242a3f4c7bdd0c2af86922cec5fe63536dab0774a"} Jan 26 00:10:59 crc kubenswrapper[4697]: I0126 00:10:59.764666 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:10:59 crc kubenswrapper[4697]: I0126 00:10:59.767302 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 00:10:59 crc kubenswrapper[4697]: I0126 00:10:59.769172 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:10:59 crc kubenswrapper[4697]: I0126 00:10:59.769911 4697 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d" exitCode=0 Jan 26 00:11:00 crc kubenswrapper[4697]: E0126 00:11:00.327669 4697 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 26 00:11:00 crc kubenswrapper[4697]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01" Netns:"/var/run/netns/22fde1ea-546d-4e6b-b2b5-b2dfb33a99cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Jan 26 00:11:00 crc kubenswrapper[4697]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 00:11:00 crc kubenswrapper[4697]: > Jan 26 00:11:00 crc kubenswrapper[4697]: E0126 00:11:00.328027 4697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 26 00:11:00 crc kubenswrapper[4697]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01" Netns:"/var/run/netns/22fde1ea-546d-4e6b-b2b5-b2dfb33a99cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Jan 26 00:11:00 crc kubenswrapper[4697]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 00:11:00 crc kubenswrapper[4697]: > pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:00 crc kubenswrapper[4697]: E0126 00:11:00.328053 4697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 26 00:11:00 crc kubenswrapper[4697]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01" Netns:"/var/run/netns/22fde1ea-546d-4e6b-b2b5-b2dfb33a99cb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Jan 26 00:11:00 crc kubenswrapper[4697]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 00:11:00 crc kubenswrapper[4697]: > pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:00 crc kubenswrapper[4697]: E0126 00:11:00.328163 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01\\\" 
Netns:\\\"/var/run/netns/22fde1ea-546d-4e6b-b2b5-b2dfb33a99cb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=e750f181956060dd7e6224a8ad110db98bfc9bfd7f915c6725b0a645a1d9ee01;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s\\\": dial tcp 38.102.83.150:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" Jan 26 00:11:00 crc kubenswrapper[4697]: E0126 00:11:00.352518 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f6d69d113a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC m=+200.027167057,LastTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC m=+200.027167057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.776867 4697 generic.go:334] "Generic (PLEG): container finished" podID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerID="9d5c261753be656f9e0748b4384942a591968edaf26925e897b7e4ae7207f3a4" exitCode=0 Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.776946 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerDied","Data":"9d5c261753be656f9e0748b4384942a591968edaf26925e897b7e4ae7207f3a4"} Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.777981 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.778724 4697 status_manager.go:851] "Failed to get 
status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.781276 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.783546 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.785896 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213" exitCode=0 Jan 26 00:11:00 crc kubenswrapper[4697]: I0126 00:11:00.785981 4697 scope.go:117] "RemoveContainer" containerID="173095e9414515079d858942abcce9d6a955ec0ae596bdbba30b9b40fedd5928" Jan 26 00:11:01 crc kubenswrapper[4697]: E0126 00:11:01.490687 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1179648738},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632
d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\
\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: E0126 00:11:01.491713 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: E0126 00:11:01.491909 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: E0126 00:11:01.492053 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: E0126 00:11:01.492242 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: E0126 00:11:01.492255 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.793394 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2f51327-b54f-430d-8728-302b40279d68" containerID="99b70f9f24e6b59c5779af5c5c93b1726e29cca28480244c3a2ab4217949f679" exitCode=0 Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 
00:11:01.793475 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerDied","Data":"99b70f9f24e6b59c5779af5c5c93b1726e29cca28480244c3a2ab4217949f679"} Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.794470 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.795143 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.795547 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.796670 4697 generic.go:334] "Generic (PLEG): container finished" podID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerID="2ae4ebb3471bded4d80b7cd81d8aacf1bb8f20359d9227cea5d27d17e3b28f0f" exitCode=0 Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.796728 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxm8l" 
event={"ID":"52a87d4f-2b9b-44c1-9457-cedcf68d8819","Type":"ContainerDied","Data":"2ae4ebb3471bded4d80b7cd81d8aacf1bb8f20359d9227cea5d27d17e3b28f0f"} Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.797399 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.798042 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.798263 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.798451 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.799729 4697 generic.go:334] "Generic (PLEG): container finished" podID="eeb0379c-8f6d-4151-8552-d891cf28c05b" containerID="788854723e67614f3debe570b72222d9a92c3bd1487f6979e7722312551238d0" exitCode=0 Jan 26 
00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.799787 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eeb0379c-8f6d-4151-8552-d891cf28c05b","Type":"ContainerDied","Data":"788854723e67614f3debe570b72222d9a92c3bd1487f6979e7722312551238d0"} Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.800240 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.800496 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.800991 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.801335 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.801554 4697 status_manager.go:851] "Failed to get status for 
pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.801678 4697 generic.go:334] "Generic (PLEG): container finished" podID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerID="0c4434458cbeb2495b9a3bf1dd8ec7ed5e7177302f27efe8107ca8e584b02462" exitCode=0 Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.801710 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82kcj" event={"ID":"dcfffbc4-4576-4314-b1a2-b990bd8dfa28","Type":"ContainerDied","Data":"0c4434458cbeb2495b9a3bf1dd8ec7ed5e7177302f27efe8107ca8e584b02462"} Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.802174 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.802401 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.802593 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 
38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.802758 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.802924 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.803132 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.803579 4697 generic.go:334] "Generic (PLEG): container finished" podID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerID="440b183ef7678c930e450e0a239aff77833099119a69716925bda97d7552ab39" exitCode=0 Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.803625 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xk" event={"ID":"8355e146-dafa-45db-85a5-b1534eeb6b53","Type":"ContainerDied","Data":"440b183ef7678c930e450e0a239aff77833099119a69716925bda97d7552ab39"} Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.804051 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.804265 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.804702 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.804907 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.805149 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.805352 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" 
pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.805614 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.807404 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.807983 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424" exitCode=0 Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.809500 4697 generic.go:334] "Generic (PLEG): container finished" podID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerID="7d7cb96a8e5917cc78bc0d76ddca19cd78c69b627c7a98cfd93f5e39f590fce1" exitCode=0 Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.809545 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f66" event={"ID":"9cfec86d-c03e-4a9d-8571-a233cba73af1","Type":"ContainerDied","Data":"7d7cb96a8e5917cc78bc0d76ddca19cd78c69b627c7a98cfd93f5e39f590fce1"} Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.810284 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.810574 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.810758 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.810989 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.811036 4697 generic.go:334] "Generic (PLEG): container finished" podID="d7db0326-548c-4c19-86c3-15af398d39cb" containerID="0b4060f259824a1321bca117f98ee0203ba3ce2d81318c3d1b8251e82b7f85b9" exitCode=0 Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.811053 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wklhb" event={"ID":"d7db0326-548c-4c19-86c3-15af398d39cb","Type":"ContainerDied","Data":"0b4060f259824a1321bca117f98ee0203ba3ce2d81318c3d1b8251e82b7f85b9"} Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.811226 4697 
status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.811443 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.811634 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.811777 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.811995 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.812206 4697 
status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.812378 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.812541 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.812690 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.812824 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.812967 4697 status_manager.go:851] "Failed 
to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.813192 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:01 crc kubenswrapper[4697]: I0126 00:11:01.813451 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[4697]: E0126 00:11:02.934942 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[4697]: E0126 00:11:02.935559 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[4697]: E0126 00:11:02.936778 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:02 crc 
kubenswrapper[4697]: E0126 00:11:02.937437 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[4697]: E0126 00:11:02.937609 4697 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:02 crc kubenswrapper[4697]: I0126 00:11:02.937627 4697 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 00:11:02 crc kubenswrapper[4697]: E0126 00:11:02.937773 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.042228 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.042807 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.043242 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.043578 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.043897 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.044178 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 
38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.044365 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.044577 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.044836 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.045048 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.101776 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-var-lock\") pod \"eeb0379c-8f6d-4151-8552-d891cf28c05b\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " Jan 26 00:11:03 crc 
kubenswrapper[4697]: I0126 00:11:03.101864 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-kubelet-dir\") pod \"eeb0379c-8f6d-4151-8552-d891cf28c05b\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.101894 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-var-lock" (OuterVolumeSpecName: "var-lock") pod "eeb0379c-8f6d-4151-8552-d891cf28c05b" (UID: "eeb0379c-8f6d-4151-8552-d891cf28c05b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.101928 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eeb0379c-8f6d-4151-8552-d891cf28c05b-kube-api-access\") pod \"eeb0379c-8f6d-4151-8552-d891cf28c05b\" (UID: \"eeb0379c-8f6d-4151-8552-d891cf28c05b\") " Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.101934 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eeb0379c-8f6d-4151-8552-d891cf28c05b" (UID: "eeb0379c-8f6d-4151-8552-d891cf28c05b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.102218 4697 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.102234 4697 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eeb0379c-8f6d-4151-8552-d891cf28c05b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.106713 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb0379c-8f6d-4151-8552-d891cf28c05b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eeb0379c-8f6d-4151-8552-d891cf28c05b" (UID: "eeb0379c-8f6d-4151-8552-d891cf28c05b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:11:03 crc kubenswrapper[4697]: E0126 00:11:03.138438 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.202731 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eeb0379c-8f6d-4151-8552-d891cf28c05b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:03 crc kubenswrapper[4697]: E0126 00:11:03.538787 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 
00:11:03.824327 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.824344 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"eeb0379c-8f6d-4151-8552-d891cf28c05b","Type":"ContainerDied","Data":"47fcc26a0a9bbe5315ece35757fc4d23760ea2771db61f9fed4d11895dd68471"} Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.824680 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47fcc26a0a9bbe5315ece35757fc4d23760ea2771db61f9fed4d11895dd68471" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.827376 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf"} Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.835750 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.836190 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.836582 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" 
pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.836883 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.837145 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.837406 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.837673 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.837944 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" 
pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:03 crc kubenswrapper[4697]: I0126 00:11:03.838232 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:04 crc kubenswrapper[4697]: E0126 00:11:04.339829 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Jan 26 00:11:04 crc kubenswrapper[4697]: I0126 00:11:04.836536 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:05 crc kubenswrapper[4697]: I0126 00:11:05.363511 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241" exitCode=0 Jan 26 00:11:05 crc kubenswrapper[4697]: E0126 00:11:05.941039 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.329422 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.329510 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.329573 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.330450 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.330512 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" containerID="cri-o://dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98" gracePeriod=600 Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.371885 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.374191 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.374688 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.374992 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.375325 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.375680 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.375976 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" 
pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.376285 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.376606 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.376893 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:06 crc kubenswrapper[4697]: I0126 00:11:06.377196 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.093045 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.094121 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.094692 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.095122 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.095748 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.096155 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.096507 4697 
status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.096996 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.097319 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.097568 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.097870 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.098337 4697 status_manager.go:851] 
"Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.098909 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253184 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253282 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253308 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253382 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253426 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253455 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253832 4697 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253860 4697 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.253871 4697 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.384727 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.385726 4697 scope.go:117] "RemoveContainer" containerID="5f07c8a0c4ee5e3c33d5274f167030de7da0834e51a105a1c6bb2ebea10db213" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.385873 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.390919 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerID="dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98" exitCode=0 Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.390975 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98"} Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.403355 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.404845 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.405364 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.405695 4697 status_manager.go:851] "Failed to get status for pod" 
podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.406179 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.406471 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.406760 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.407159 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.407484 4697 status_manager.go:851] "Failed to get status for pod" 
podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.407741 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:07 crc kubenswrapper[4697]: I0126 00:11:07.407983 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.662848 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.664382 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.664800 4697 status_manager.go:851] "Failed to get status for pod" 
podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.665871 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.666292 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.666632 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.666938 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.667156 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.667163 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.667647 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.668782 4697 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:08 crc kubenswrapper[4697]: I0126 00:11:08.669513 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:09 crc kubenswrapper[4697]: E0126 00:11:09.142818 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: 
connection refused" interval="6.4s" Jan 26 00:11:10 crc kubenswrapper[4697]: E0126 00:11:10.353369 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f6d69d113a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC m=+200.027167057,LastTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC m=+200.027167057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 00:11:10 crc kubenswrapper[4697]: I0126 00:11:10.527537 4697 scope.go:117] "RemoveContainer" containerID="d372f38c6900977f7e19c1c1af7a0dbddfb40c9c7b44607d9a0f69f571cc135d" Jan 26 00:11:11 crc kubenswrapper[4697]: E0126 00:11:11.503105 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T00:11:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[],\\\"sizeBytes\\\":1179648738},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632
d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\
\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:11 crc kubenswrapper[4697]: E0126 00:11:11.504551 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:11 crc kubenswrapper[4697]: E0126 00:11:11.504798 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:11 crc kubenswrapper[4697]: E0126 00:11:11.504994 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:11 crc kubenswrapper[4697]: E0126 00:11:11.505232 4697 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:11 crc kubenswrapper[4697]: E0126 00:11:11.505255 4697 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 00:11:11 crc kubenswrapper[4697]: I0126 00:11:11.508960 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.660824 4697 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.662032 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.663441 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.663791 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.664085 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.665540 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.665788 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.666148 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.666418 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.666687 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.666975 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.680436 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.680619 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7" Jan 26 00:11:12 crc kubenswrapper[4697]: E0126 00:11:12.681228 4697 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:12 crc kubenswrapper[4697]: I0126 00:11:12.681703 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:13 crc kubenswrapper[4697]: I0126 00:11:13.423027 4697 scope.go:117] "RemoveContainer" containerID="2adb19c1ce6cbec06357109bcb49be47eaad9048f2704701a3afe7e113765424" Jan 26 00:11:13 crc kubenswrapper[4697]: I0126 00:11:13.523493 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 00:11:13 crc kubenswrapper[4697]: I0126 00:11:13.660065 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:13 crc kubenswrapper[4697]: I0126 00:11:13.660669 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.434137 4697 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.434219 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.533941 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.533983 4697 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163" exitCode=1 Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.534013 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163"} Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.534469 4697 scope.go:117] "RemoveContainer" containerID="014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.534753 4697 status_manager.go:851] "Failed to get status for 
pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.535106 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.535498 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.535860 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.536295 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.536760 4697 status_manager.go:851] "Failed to get status for 
pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.536998 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.537260 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.537509 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.537732 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:14 crc kubenswrapper[4697]: I0126 00:11:14.537988 4697 status_manager.go:851] "Failed to get status for 
pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:15 crc kubenswrapper[4697]: E0126 00:11:15.544138 4697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="7s" Jan 26 00:11:17 crc kubenswrapper[4697]: I0126 00:11:17.146279 4697 scope.go:117] "RemoveContainer" containerID="3bd9a345135a989debc56de7c8476db686728a9be6ca23c48c5b8226bcfa43aa" Jan 26 00:11:17 crc kubenswrapper[4697]: I0126 00:11:17.216799 4697 scope.go:117] "RemoveContainer" containerID="e5e3d987b6ed56d91e2bdc6d119336df3eea299b0fd3fc98ccf5af57372e7241" Jan 26 00:11:17 crc kubenswrapper[4697]: W0126 00:11:17.230224 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-9bd3d1efd52ca1886042eb35ef6df1abf6816934b6a863d194ac47dc300ccc53 WatchSource:0}: Error finding container 9bd3d1efd52ca1886042eb35ef6df1abf6816934b6a863d194ac47dc300ccc53: Status 404 returned error can't find the container with id 9bd3d1efd52ca1886042eb35ef6df1abf6816934b6a863d194ac47dc300ccc53 Jan 26 00:11:17 crc kubenswrapper[4697]: I0126 00:11:17.345269 4697 scope.go:117] "RemoveContainer" containerID="adaf47a67708f185f56a41a118c0ef09ddffee8e65cb0f2601587fad8657dccc" Jan 26 00:11:17 crc kubenswrapper[4697]: I0126 00:11:17.563436 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9bd3d1efd52ca1886042eb35ef6df1abf6816934b6a863d194ac47dc300ccc53"} 
Jan 26 00:11:17 crc kubenswrapper[4697]: E0126 00:11:17.814967 4697 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 26 00:11:17 crc kubenswrapper[4697]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25" Netns:"/var/run/netns/76d0926c-692c-4e50-a0d7-cbd4ed8faba4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Jan 26 00:11:17 crc kubenswrapper[4697]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 00:11:17 crc kubenswrapper[4697]: > Jan 26 00:11:17 crc kubenswrapper[4697]: E0126 00:11:17.815415 4697 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 26 00:11:17 crc kubenswrapper[4697]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25" Netns:"/var/run/netns/76d0926c-692c-4e50-a0d7-cbd4ed8faba4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Jan 26 00:11:17 crc kubenswrapper[4697]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 00:11:17 crc kubenswrapper[4697]: > pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:17 crc kubenswrapper[4697]: E0126 00:11:17.815443 4697 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 26 00:11:17 crc kubenswrapper[4697]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25" Netns:"/var/run/netns/76d0926c-692c-4e50-a0d7-cbd4ed8faba4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Jan 26 00:11:17 crc kubenswrapper[4697]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 00:11:17 crc kubenswrapper[4697]: > pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:17 crc kubenswrapper[4697]: E0126 00:11:17.815502 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-586d5b9769-lbpdf_openshift-authentication_b0ac8fa9-d85e-44d7-83f1-2118290c4013_0(0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25): error adding pod openshift-authentication_oauth-openshift-586d5b9769-lbpdf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25\\\" 
Netns:\\\"/var/run/netns/76d0926c-692c-4e50-a0d7-cbd4ed8faba4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-586d5b9769-lbpdf;K8S_POD_INFRA_CONTAINER_ID=0eea56fa74853417fc9116e93bd5134ff699ba21c2b942d6ca8603da56585f25;K8S_POD_UID=b0ac8fa9-d85e-44d7-83f1-2118290c4013\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-586d5b9769-lbpdf] networking: Multus: [openshift-authentication/oauth-openshift-586d5b9769-lbpdf/b0ac8fa9-d85e-44d7-83f1-2118290c4013]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-586d5b9769-lbpdf in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-586d5b9769-lbpdf?timeout=1m0s\\\": dial tcp 38.102.83.150:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.675140 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.675613 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.676206 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.676776 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.677168 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.677748 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.678216 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.678653 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.679009 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.679653 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.680446 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:18 crc kubenswrapper[4697]: I0126 00:11:18.681177 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:19 crc kubenswrapper[4697]: I0126 00:11:19.583372 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"0190f13b753887868a356c904d8c8ead28f75b77c98a9b67db0b95f6c3108511"} Jan 26 00:11:20 crc kubenswrapper[4697]: E0126 00:11:20.354572 4697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e1f6d69d113a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC m=+200.027167057,LastTimestamp:2026-01-26 00:10:58.390389667 +0000 UTC 
m=+200.027167057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.596976 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerStarted","Data":"e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.598761 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.598979 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.599223 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.599448 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.599799 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.600402 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.600616 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.600886 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.601200 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.601481 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.601781 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.603508 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.604940 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.605041 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b2257fe997abe3e95c8ef54d52d322e19ea06aca7162dd71ba1e6e04aa0e9a68"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.605700 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.606044 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.606699 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.607022 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.607225 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.607473 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.607732 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.607899 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.608103 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.608340 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.608568 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.608729 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.608839 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxm8l" event={"ID":"52a87d4f-2b9b-44c1-9457-cedcf68d8819","Type":"ContainerStarted","Data":"db411f20868abaa79cb5cb4bb1d0a0b698b2497cf0a8c77e83347b9b85f867d5"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.609392 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.609627 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.609860 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.610021 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.610654 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.611072 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.611421 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.611693 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.611989 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.612443 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.612622 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.612656 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82kcj" event={"ID":"dcfffbc4-4576-4314-b1a2-b990bd8dfa28","Type":"ContainerStarted","Data":"208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.612816 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.613147 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.613325 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.613553 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.613796 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.614027 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.614257 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.614481 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.614688 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.614883 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.615126 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.615322 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.615565 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.616407 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xk" event={"ID":"8355e146-dafa-45db-85a5-b1534eeb6b53","Type":"ContainerStarted","Data":"a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.617856 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.618055 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.618417 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.618693 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.619563 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.620166 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.620561 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f66" event={"ID":"9cfec86d-c03e-4a9d-8571-a233cba73af1","Type":"ContainerStarted","Data":"544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.620570 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.620824 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.621180 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.621413 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.621748 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.622031 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.622547 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.622984 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.623264 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerStarted","Data":"efcb39628d70a560941d9f0100c54785197ff4d9085511ddaa57591f9ea4a972"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.623406 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.623731 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.624143 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.625022 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.625839 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7lfb" event={"ID":"c086f88d-6f74-44d5-9728-f59ebcec3dce","Type":"ContainerStarted","Data":"89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.625927 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.626236 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.626447 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.627016 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.628034 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.628637 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.628780 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f2f2b786fc0e8964df3b62579c0551734c9e0eb6f78d6ebadaec95300e9dc7fd"}
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.628883 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.628905 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.629589 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: E0126 00:11:20.629628 4697 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.629822 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.630112 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.630354 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.630858 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.631148 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.631406 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.631655 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.631921 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.632735 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.633107 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.633332 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.633720 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.633922 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.634095 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.638228 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused"
Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.638563 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x"
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.638799 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.639016 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.639271 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.639540 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.639755 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.639924 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.640162 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.640312 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wklhb" event={"ID":"d7db0326-548c-4c19-86c3-15af398d39cb","Type":"ContainerStarted","Data":"0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93"} Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.641389 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.641621 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mb5j7\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.641826 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.642019 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.642378 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.665407 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.679334 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 
00:11:20.689207 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.689651 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.699360 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.719441 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.739692 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.760355 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.779370 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.799060 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.818911 4697 status_manager.go:851] "Failed to get status for pod" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" pod="openshift-marketplace/community-operators-kxm8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kxm8l\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.838657 4697 status_manager.go:851] "Failed to get status for pod" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" pod="openshift-marketplace/certified-operators-p9f66" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-p9f66\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.858641 4697 status_manager.go:851] "Failed to get status for pod" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" pod="openshift-marketplace/redhat-operators-tp2xk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tp2xk\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.878800 4697 status_manager.go:851] "Failed to get status for pod" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" pod="openshift-marketplace/redhat-marketplace-s7lfb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s7lfb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.898299 4697 status_manager.go:851] "Failed to get status for pod" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" pod="openshift-marketplace/redhat-operators-82kcj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-82kcj\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.919002 4697 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.935674 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.935726 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.938520 4697 status_manager.go:851] "Failed to get status for pod" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" pod="openshift-marketplace/redhat-marketplace-xbn5x" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xbn5x\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.958851 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2f51327-b54f-430d-8728-302b40279d68" pod="openshift-marketplace/certified-operators-w4m7r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-w4m7r\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.978572 4697 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:20 crc kubenswrapper[4697]: I0126 00:11:20.998494 4697 status_manager.go:851] "Failed to get status for pod" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.018879 4697 status_manager.go:851] "Failed to get status for pod" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mb5j7\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.039148 4697 status_manager.go:851] "Failed to get status for pod" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" 
pod="openshift-marketplace/community-operators-wklhb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-wklhb\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.058966 4697 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.200776 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.200830 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.648855 4697 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f2f2b786fc0e8964df3b62579c0551734c9e0eb6f78d6ebadaec95300e9dc7fd" exitCode=0 Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.649221 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f2f2b786fc0e8964df3b62579c0551734c9e0eb6f78d6ebadaec95300e9dc7fd"} Jan 26 00:11:21 crc kubenswrapper[4697]: I0126 00:11:21.649265 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f06eac15dd68dae44fe1142bb6e6dea33ca69b99e1a44fc5a15bd7b230e47351"} Jan 26 00:11:22 crc kubenswrapper[4697]: I0126 00:11:22.190227 4697 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-w4m7r" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="registry-server" probeResult="failure" output=< Jan 26 00:11:22 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 26 00:11:22 crc kubenswrapper[4697]: > Jan 26 00:11:22 crc kubenswrapper[4697]: I0126 00:11:22.196541 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wklhb" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="registry-server" probeResult="failure" output=< Jan 26 00:11:22 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 26 00:11:22 crc kubenswrapper[4697]: > Jan 26 00:11:22 crc kubenswrapper[4697]: I0126 00:11:22.351848 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p9f66" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="registry-server" probeResult="failure" output=< Jan 26 00:11:22 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 26 00:11:22 crc kubenswrapper[4697]: > Jan 26 00:11:22 crc kubenswrapper[4697]: I0126 00:11:22.960338 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:11:22 crc kubenswrapper[4697]: I0126 00:11:22.960397 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.201659 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.202011 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.425930 4697 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.743582 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.743626 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.743637 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.744665 4697 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.744706 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:11:23 crc kubenswrapper[4697]: I0126 00:11:23.760746 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fa794ba8f062ff790887761591ce4b579735af6a0be49d208f9b25428782b1c2"} Jan 26 00:11:24 crc kubenswrapper[4697]: I0126 00:11:24.087858 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:11:24 crc kubenswrapper[4697]: I0126 
00:11:24.087993 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:11:24 crc kubenswrapper[4697]: I0126 00:11:24.338683 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-xbn5x" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="registry-server" probeResult="failure" output=< Jan 26 00:11:24 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 26 00:11:24 crc kubenswrapper[4697]: > Jan 26 00:11:24 crc kubenswrapper[4697]: I0126 00:11:24.434605 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:11:24 crc kubenswrapper[4697]: I0126 00:11:24.769674 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"60a124aa406246d14e67dd9aafa6dfb0676482238918a6e9d41da3603fb1364c"} Jan 26 00:11:24 crc kubenswrapper[4697]: I0126 00:11:24.770011 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0269b3d102b36c85fe3deaa83fcbc646121f2a25d102a04f15e248823e0919e7"} Jan 26 00:11:24 crc kubenswrapper[4697]: I0126 00:11:24.974208 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tp2xk" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="registry-server" probeResult="failure" output=< Jan 26 00:11:24 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 26 00:11:24 crc kubenswrapper[4697]: > Jan 26 00:11:25 crc kubenswrapper[4697]: I0126 00:11:25.238264 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82kcj" 
podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="registry-server" probeResult="failure" output=< Jan 26 00:11:25 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 26 00:11:25 crc kubenswrapper[4697]: > Jan 26 00:11:25 crc kubenswrapper[4697]: I0126 00:11:25.794025 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"94f71deaae77c4762d995fe425832b57e214280e898f9c19b21e26101c1ce6f3"} Jan 26 00:11:25 crc kubenswrapper[4697]: I0126 00:11:25.794299 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:25 crc kubenswrapper[4697]: I0126 00:11:25.794446 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7" Jan 26 00:11:25 crc kubenswrapper[4697]: I0126 00:11:25.794480 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7" Jan 26 00:11:27 crc kubenswrapper[4697]: I0126 00:11:27.682662 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:27 crc kubenswrapper[4697]: I0126 00:11:27.682761 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:27 crc kubenswrapper[4697]: I0126 00:11:27.692211 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.467672 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.470291 4697 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.577945 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.738613 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.739114 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.834660 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.889334 4697 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.989302 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:11:30 crc kubenswrapper[4697]: I0126 00:11:30.990427 4697 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6629c426-23b9-4e45-a7fa-23f119dcbde7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:20Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T00:11:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f2b786fc0e8964df3b62579c0551734c9e0eb6f78d6ebadaec95300e9dc7fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2f2b786fc0e8964df3b62579c0551734c9e0eb6f78d6ebadaec95300e9dc7fd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T00:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T00:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Pod 
\"kube-apiserver-crc\" is invalid: metadata.uid: Invalid value: \"6629c426-23b9-4e45-a7fa-23f119dcbde7\": field is immutable" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.001829 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.255729 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.285258 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.312639 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.590620 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.829990 4697 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c5772ee0-90a5-46fa-a18b-4a8a42242d88" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.863905 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" event={"ID":"b0ac8fa9-d85e-44d7-83f1-2118290c4013","Type":"ContainerStarted","Data":"c1d4021358034ea458c456029829b31b551327e2e8450d6d7bf57501bc837120"} Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.864581 4697 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.864623 4697 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6629c426-23b9-4e45-a7fa-23f119dcbde7" Jan 26 00:11:31 crc kubenswrapper[4697]: I0126 00:11:31.900820 4697 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c5772ee0-90a5-46fa-a18b-4a8a42242d88" Jan 26 00:11:32 crc kubenswrapper[4697]: I0126 00:11:32.870546 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/0.log" Jan 26 00:11:32 crc kubenswrapper[4697]: I0126 00:11:32.870894 4697 generic.go:334] "Generic (PLEG): container finished" podID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" containerID="f17d37a413b0722dee23f13499000b9abec535a72b15f676cec2ca79a6605a97" exitCode=255 Jan 26 00:11:32 crc kubenswrapper[4697]: I0126 00:11:32.870933 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" event={"ID":"b0ac8fa9-d85e-44d7-83f1-2118290c4013","Type":"ContainerDied","Data":"f17d37a413b0722dee23f13499000b9abec535a72b15f676cec2ca79a6605a97"} Jan 26 00:11:32 crc kubenswrapper[4697]: I0126 00:11:32.871546 4697 scope.go:117] "RemoveContainer" containerID="f17d37a413b0722dee23f13499000b9abec535a72b15f676cec2ca79a6605a97" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.029971 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.075123 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.180455 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.719574 4697 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.719633 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.771407 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.813605 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.880175 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/0.log" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.880302 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" event={"ID":"b0ac8fa9-d85e-44d7-83f1-2118290c4013","Type":"ContainerStarted","Data":"32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe"} Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.880817 4697 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" 
containerID="cri-o://f17d37a413b0722dee23f13499000b9abec535a72b15f676cec2ca79a6605a97" Jan 26 00:11:33 crc kubenswrapper[4697]: I0126 00:11:33.880842 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.171869 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.220597 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.890093 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/1.log" Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.891440 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/0.log" Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.891712 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" event={"ID":"b0ac8fa9-d85e-44d7-83f1-2118290c4013","Type":"ContainerDied","Data":"32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe"} Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.891792 4697 scope.go:117] "RemoveContainer" containerID="f17d37a413b0722dee23f13499000b9abec535a72b15f676cec2ca79a6605a97" Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.891542 4697 generic.go:334] "Generic (PLEG): container finished" podID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" containerID="32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe" exitCode=255 Jan 26 00:11:34 crc kubenswrapper[4697]: I0126 00:11:34.892334 4697 scope.go:117] 
"RemoveContainer" containerID="32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe" Jan 26 00:11:34 crc kubenswrapper[4697]: E0126 00:11:34.892747 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" Jan 26 00:11:35 crc kubenswrapper[4697]: I0126 00:11:35.913392 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/1.log" Jan 26 00:11:35 crc kubenswrapper[4697]: I0126 00:11:35.914521 4697 scope.go:117] "RemoveContainer" containerID="32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe" Jan 26 00:11:35 crc kubenswrapper[4697]: E0126 00:11:35.914822 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" Jan 26 00:11:37 crc kubenswrapper[4697]: I0126 00:11:37.480880 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:37 crc kubenswrapper[4697]: I0126 00:11:37.480924 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:37 crc kubenswrapper[4697]: I0126 00:11:37.481384 4697 scope.go:117] "RemoveContainer" 
containerID="32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe" Jan 26 00:11:37 crc kubenswrapper[4697]: E0126 00:11:37.481579 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" Jan 26 00:11:43 crc kubenswrapper[4697]: I0126 00:11:43.720408 4697 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 00:11:43 crc kubenswrapper[4697]: I0126 00:11:43.721975 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 00:11:43 crc kubenswrapper[4697]: I0126 00:11:43.722060 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 00:11:43 crc kubenswrapper[4697]: I0126 00:11:43.723216 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"b2257fe997abe3e95c8ef54d52d322e19ea06aca7162dd71ba1e6e04aa0e9a68"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 26 00:11:43 crc 
kubenswrapper[4697]: I0126 00:11:43.723363 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://b2257fe997abe3e95c8ef54d52d322e19ea06aca7162dd71ba1e6e04aa0e9a68" gracePeriod=30 Jan 26 00:11:43 crc kubenswrapper[4697]: I0126 00:11:43.950081 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 00:11:45 crc kubenswrapper[4697]: I0126 00:11:45.112113 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 00:11:45 crc kubenswrapper[4697]: I0126 00:11:45.859549 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 00:11:46 crc kubenswrapper[4697]: I0126 00:11:46.257237 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 00:11:49 crc kubenswrapper[4697]: I0126 00:11:49.787441 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 00:11:51 crc kubenswrapper[4697]: I0126 00:11:51.475659 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 00:11:51 crc kubenswrapper[4697]: I0126 00:11:51.662502 4697 scope.go:117] "RemoveContainer" containerID="32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe" Jan 26 00:11:52 crc kubenswrapper[4697]: I0126 00:11:52.030746 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/1.log" Jan 26 00:11:52 crc kubenswrapper[4697]: I0126 00:11:52.031591 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" event={"ID":"b0ac8fa9-d85e-44d7-83f1-2118290c4013","Type":"ContainerStarted","Data":"d3a3c8f8253053866f3abfbf6e871555d3b07a51801eb6d2fbdb222143870154"} Jan 26 00:11:52 crc kubenswrapper[4697]: I0126 00:11:52.033920 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:52 crc kubenswrapper[4697]: I0126 00:11:52.035528 4697 patch_prober.go:28] interesting pod/oauth-openshift-586d5b9769-lbpdf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" start-of-body= Jan 26 00:11:52 crc kubenswrapper[4697]: I0126 00:11:52.035673 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[4697]: I0126 00:11:53.039907 4697 patch_prober.go:28] interesting pod/oauth-openshift-586d5b9769-lbpdf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" start-of-body= Jan 26 00:11:53 crc kubenswrapper[4697]: I0126 00:11:53.040562 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": dial tcp 10.217.0.57:6443: connect: connection refused" Jan 26 00:11:53 crc kubenswrapper[4697]: I0126 00:11:53.293186 4697 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.051283 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/2.log" Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.052549 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/1.log" Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.052615 4697 generic.go:334] "Generic (PLEG): container finished" podID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" containerID="d3a3c8f8253053866f3abfbf6e871555d3b07a51801eb6d2fbdb222143870154" exitCode=255 Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.052661 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" event={"ID":"b0ac8fa9-d85e-44d7-83f1-2118290c4013","Type":"ContainerDied","Data":"d3a3c8f8253053866f3abfbf6e871555d3b07a51801eb6d2fbdb222143870154"} Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.052722 4697 scope.go:117] "RemoveContainer" containerID="32a085a2ca21948fdbb9cd25a2fca259758104a65a25dd92ffd70a2b727edefe" Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.053914 4697 scope.go:117] "RemoveContainer" containerID="d3a3c8f8253053866f3abfbf6e871555d3b07a51801eb6d2fbdb222143870154" Jan 26 00:11:54 crc kubenswrapper[4697]: E0126 00:11:54.054382 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" 
Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.072490 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.135680 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 00:11:54 crc kubenswrapper[4697]: I0126 00:11:54.206355 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 00:11:55 crc kubenswrapper[4697]: I0126 00:11:55.061233 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/2.log" Jan 26 00:11:55 crc kubenswrapper[4697]: I0126 00:11:55.436744 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 00:11:56 crc kubenswrapper[4697]: I0126 00:11:56.724575 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 00:11:57 crc kubenswrapper[4697]: I0126 00:11:57.481856 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:11:57 crc kubenswrapper[4697]: I0126 00:11:57.482616 4697 scope.go:117] "RemoveContainer" containerID="d3a3c8f8253053866f3abfbf6e871555d3b07a51801eb6d2fbdb222143870154" Jan 26 00:11:57 crc kubenswrapper[4697]: E0126 00:11:57.482969 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" 
podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" Jan 26 00:11:57 crc kubenswrapper[4697]: I0126 00:11:57.955472 4697 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 00:11:59 crc kubenswrapper[4697]: I0126 00:11:59.075807 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 00:11:59 crc kubenswrapper[4697]: I0126 00:11:59.809955 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 00:12:00 crc kubenswrapper[4697]: I0126 00:12:00.372869 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 00:12:00 crc kubenswrapper[4697]: I0126 00:12:00.658512 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 00:12:00 crc kubenswrapper[4697]: I0126 00:12:00.876776 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 00:12:01 crc kubenswrapper[4697]: I0126 00:12:01.219849 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 00:12:01 crc kubenswrapper[4697]: I0126 00:12:01.368312 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 00:12:01 crc kubenswrapper[4697]: I0126 00:12:01.584940 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 00:12:01 crc kubenswrapper[4697]: I0126 00:12:01.721705 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 00:12:01 crc kubenswrapper[4697]: 
I0126 00:12:01.977479 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 00:12:01 crc kubenswrapper[4697]: I0126 00:12:01.999444 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 00:12:01 crc kubenswrapper[4697]: I0126 00:12:01.999673 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 00:12:02 crc kubenswrapper[4697]: I0126 00:12:02.158689 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 00:12:02 crc kubenswrapper[4697]: I0126 00:12:02.908135 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 00:12:03 crc kubenswrapper[4697]: I0126 00:12:03.285402 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 00:12:03 crc kubenswrapper[4697]: I0126 00:12:03.740779 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 00:12:03 crc kubenswrapper[4697]: I0126 00:12:03.827017 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 00:12:04 crc kubenswrapper[4697]: I0126 00:12:04.128962 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 00:12:04 crc kubenswrapper[4697]: I0126 00:12:04.331136 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 00:12:04 crc kubenswrapper[4697]: I0126 00:12:04.641636 4697 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 00:12:04 crc kubenswrapper[4697]: I0126 00:12:04.669977 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.043155 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.063173 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.100429 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.113694 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.523397 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.653543 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.720768 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 00:12:05 crc kubenswrapper[4697]: I0126 00:12:05.903646 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.030710 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 
00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.071649 4697 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.254697 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.333815 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.392906 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.461410 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.869709 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.932528 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 00:12:06 crc kubenswrapper[4697]: I0126 00:12:06.950912 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 00:12:07 crc kubenswrapper[4697]: I0126 00:12:07.277142 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 00:12:07 crc kubenswrapper[4697]: I0126 00:12:07.351709 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 00:12:07 crc kubenswrapper[4697]: I0126 00:12:07.459203 4697 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 00:12:07 crc kubenswrapper[4697]: I0126 00:12:07.665236 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 00:12:07 crc kubenswrapper[4697]: I0126 00:12:07.863047 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 00:12:07 crc kubenswrapper[4697]: I0126 00:12:07.913056 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 00:12:07 crc kubenswrapper[4697]: I0126 00:12:07.950144 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.020464 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.158218 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.172563 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.591981 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.668414 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.886515 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.959874 4697 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 00:12:08 crc kubenswrapper[4697]: I0126 00:12:08.967325 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.248871 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.283977 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.499120 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.627000 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.660983 4697 scope.go:117] "RemoveContainer" containerID="d3a3c8f8253053866f3abfbf6e871555d3b07a51801eb6d2fbdb222143870154" Jan 26 00:12:09 crc kubenswrapper[4697]: E0126 00:12:09.661505 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-586d5b9769-lbpdf_openshift-authentication(b0ac8fa9-d85e-44d7-83f1-2118290c4013)\"" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podUID="b0ac8fa9-d85e-44d7-83f1-2118290c4013" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.837000 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.880874 4697 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 00:12:09 crc kubenswrapper[4697]: I0126 00:12:09.941186 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.019266 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.036694 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.116529 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.138355 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.282497 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.375518 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.400660 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.449267 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.540876 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 00:12:10 crc kubenswrapper[4697]: I0126 00:12:10.802571 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 00:12:11 crc kubenswrapper[4697]: I0126 00:12:11.113234 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 00:12:11 crc kubenswrapper[4697]: I0126 00:12:11.237862 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 00:12:11 crc kubenswrapper[4697]: I0126 00:12:11.497508 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 00:12:11 crc kubenswrapper[4697]: I0126 00:12:11.523926 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 00:12:11 crc kubenswrapper[4697]: I0126 00:12:11.587129 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 00:12:11 crc kubenswrapper[4697]: I0126 00:12:11.951338 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 00:12:12 crc kubenswrapper[4697]: I0126 00:12:12.166540 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 00:12:12 crc kubenswrapper[4697]: I0126 00:12:12.379138 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 00:12:12 crc kubenswrapper[4697]: I0126 00:12:12.426452 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 00:12:12 crc kubenswrapper[4697]: I0126 00:12:12.440882 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 00:12:12 crc kubenswrapper[4697]: I0126 00:12:12.515738 4697 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 00:12:12 crc kubenswrapper[4697]: I0126 00:12:12.684130 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 00:12:12 crc kubenswrapper[4697]: I0126 00:12:12.854337 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.024248 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.032950 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.062291 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.237061 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.449704 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.511995 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.678401 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.794664 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 
00:12:13.877088 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.877854 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.923126 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 00:12:13 crc kubenswrapper[4697]: I0126 00:12:13.936536 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.000590 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.103111 4697 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.448722 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.451908 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.623295 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.627586 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.660061 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 00:12:14 crc kubenswrapper[4697]: 
I0126 00:12:14.776539 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.780489 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.800658 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.898237 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 00:12:14 crc kubenswrapper[4697]: I0126 00:12:14.899641 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 00:12:15 crc kubenswrapper[4697]: I0126 00:12:15.333558 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 00:12:15 crc kubenswrapper[4697]: I0126 00:12:15.602690 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 00:12:15 crc kubenswrapper[4697]: I0126 00:12:15.895750 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.002519 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.008649 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.223885 4697 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.313341 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.321720 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.336972 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.368508 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.462438 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.489334 4697 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.741262 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 00:12:16 crc kubenswrapper[4697]: I0126 00:12:16.972465 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.300172 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.313809 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.360422 4697 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.428952 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.449961 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.627611 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.764114 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.774227 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.874307 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.932707 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 00:12:17 crc kubenswrapper[4697]: I0126 00:12:17.966764 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.206022 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.207629 4697 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.207691 4697 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b2257fe997abe3e95c8ef54d52d322e19ea06aca7162dd71ba1e6e04aa0e9a68" exitCode=137 Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.207725 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b2257fe997abe3e95c8ef54d52d322e19ea06aca7162dd71ba1e6e04aa0e9a68"} Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.207759 4697 scope.go:117] "RemoveContainer" containerID="014ea45cb561922d0794b73c274ceb23ea5557ecd6f1ba21b25009a5c108c163" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.405833 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.411468 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.463238 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.587062 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.644398 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.654619 
4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.664774 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.674788 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.694501 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.706530 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.828784 4697 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.829446 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tp2xk" podStartSLOduration=63.849430972 podStartE2EDuration="2m55.829417408s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="2026-01-26 00:09:25.121699321 +0000 UTC m=+106.758476711" lastFinishedPulling="2026-01-26 00:11:17.101685757 +0000 UTC m=+218.738463147" observedRunningTime="2026-01-26 00:11:31.201284195 +0000 UTC m=+232.838061585" watchObservedRunningTime="2026-01-26 00:12:18.829417408 +0000 UTC m=+280.466194788" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.831697 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9f66" podStartSLOduration=69.660019154 podStartE2EDuration="2m58.831674219s" podCreationTimestamp="2026-01-26 00:09:20 +0000 UTC" 
firstStartedPulling="2026-01-26 00:09:23.035650566 +0000 UTC m=+104.672427956" lastFinishedPulling="2026-01-26 00:11:12.207305631 +0000 UTC m=+213.844083021" observedRunningTime="2026-01-26 00:11:31.175226585 +0000 UTC m=+232.812003975" watchObservedRunningTime="2026-01-26 00:12:18.831674219 +0000 UTC m=+280.468451619" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.831823 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wklhb" podStartSLOduration=64.773879234 podStartE2EDuration="2m58.831817294s" podCreationTimestamp="2026-01-26 00:09:20 +0000 UTC" firstStartedPulling="2026-01-26 00:09:23.046395087 +0000 UTC m=+104.683172477" lastFinishedPulling="2026-01-26 00:11:17.104333147 +0000 UTC m=+218.741110537" observedRunningTime="2026-01-26 00:11:31.04271215 +0000 UTC m=+232.679489530" watchObservedRunningTime="2026-01-26 00:12:18.831817294 +0000 UTC m=+280.468594694" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.831916 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xbn5x" podStartSLOduration=71.207685113 podStartE2EDuration="2m56.831911327s" podCreationTimestamp="2026-01-26 00:09:22 +0000 UTC" firstStartedPulling="2026-01-26 00:09:25.098188948 +0000 UTC m=+106.734966338" lastFinishedPulling="2026-01-26 00:11:10.722415142 +0000 UTC m=+212.359192552" observedRunningTime="2026-01-26 00:11:31.287434515 +0000 UTC m=+232.924211905" watchObservedRunningTime="2026-01-26 00:12:18.831911327 +0000 UTC m=+280.468688727" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.832188 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w4m7r" podStartSLOduration=63.738915102 podStartE2EDuration="2m58.832180795s" podCreationTimestamp="2026-01-26 00:09:20 +0000 UTC" firstStartedPulling="2026-01-26 00:09:22.011149547 +0000 UTC m=+103.647926937" 
lastFinishedPulling="2026-01-26 00:11:17.10441524 +0000 UTC m=+218.741192630" observedRunningTime="2026-01-26 00:11:31.31267679 +0000 UTC m=+232.949454180" watchObservedRunningTime="2026-01-26 00:12:18.832180795 +0000 UTC m=+280.468958195" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.832982 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=81.832974041 podStartE2EDuration="1m21.832974041s" podCreationTimestamp="2026-01-26 00:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:31.345937057 +0000 UTC m=+232.982714447" watchObservedRunningTime="2026-01-26 00:12:18.832974041 +0000 UTC m=+280.469751441" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.833557 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kxm8l" podStartSLOduration=63.740451892 podStartE2EDuration="2m58.833550429s" podCreationTimestamp="2026-01-26 00:09:20 +0000 UTC" firstStartedPulling="2026-01-26 00:09:22.010603551 +0000 UTC m=+103.647380941" lastFinishedPulling="2026-01-26 00:11:17.103702088 +0000 UTC m=+218.740479478" observedRunningTime="2026-01-26 00:11:31.149757214 +0000 UTC m=+232.786534604" watchObservedRunningTime="2026-01-26 00:12:18.833550429 +0000 UTC m=+280.470327829" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.833674 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s7lfb" podStartSLOduration=64.834729569 podStartE2EDuration="2m56.833669882s" podCreationTimestamp="2026-01-26 00:09:22 +0000 UTC" firstStartedPulling="2026-01-26 00:09:25.105466126 +0000 UTC m=+106.742243516" lastFinishedPulling="2026-01-26 00:11:17.104406439 +0000 UTC m=+218.741183829" observedRunningTime="2026-01-26 00:11:31.23909161 +0000 UTC 
m=+232.875869010" watchObservedRunningTime="2026-01-26 00:12:18.833669882 +0000 UTC m=+280.470447292" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.835842 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82kcj" podStartSLOduration=68.542310146 podStartE2EDuration="2m55.835835441s" podCreationTimestamp="2026-01-26 00:09:23 +0000 UTC" firstStartedPulling="2026-01-26 00:09:26.13019441 +0000 UTC m=+107.766971800" lastFinishedPulling="2026-01-26 00:11:13.423719705 +0000 UTC m=+215.060497095" observedRunningTime="2026-01-26 00:11:31.270156951 +0000 UTC m=+232.906934341" watchObservedRunningTime="2026-01-26 00:12:18.835835441 +0000 UTC m=+280.472612851" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.836468 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.836512 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.836538 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-586d5b9769-lbpdf"] Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.837246 4697 scope.go:117] "RemoveContainer" containerID="d3a3c8f8253053866f3abfbf6e871555d3b07a51801eb6d2fbdb222143870154" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.839505 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.848047 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.848149 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 
26 00:12:18 crc kubenswrapper[4697]: I0126 00:12:18.864330 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=48.864304449 podStartE2EDuration="48.864304449s" podCreationTimestamp="2026-01-26 00:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:18.860328193 +0000 UTC m=+280.497105603" watchObservedRunningTime="2026-01-26 00:12:18.864304449 +0000 UTC m=+280.501081839" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.165961 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.216437 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.278124 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.281604 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.290494 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.564049 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.793848 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.862480 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.967496 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 00:12:19 crc kubenswrapper[4697]: I0126 00:12:19.989782 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.039289 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.179694 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.225889 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-586d5b9769-lbpdf_b0ac8fa9-d85e-44d7-83f1-2118290c4013/oauth-openshift/2.log" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.226008 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" event={"ID":"b0ac8fa9-d85e-44d7-83f1-2118290c4013","Type":"ContainerStarted","Data":"24d1124f11400dc232143680a3c3067cf2d30cf7cf42d7e8c11536c9d72b12ca"} Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.226461 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.228467 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.229792 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1af76966fbb01771891653c08fb6bc243f39245f7e6a576eb76f34d6f775a361"} Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.231921 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.248550 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.249323 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-586d5b9769-lbpdf" podStartSLOduration=109.24930529 podStartE2EDuration="1m49.24930529s" podCreationTimestamp="2026-01-26 00:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:11:33.909187157 +0000 UTC m=+235.545964577" watchObservedRunningTime="2026-01-26 00:12:20.24930529 +0000 UTC m=+281.886082680" Jan 26 00:12:20 crc kubenswrapper[4697]: I0126 00:12:20.354226 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.648297 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.672605 4697 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.692515 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.698975 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.725104 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.860480 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.883786 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.886448 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.938376 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:20.947496 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.047540 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.081279 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.111414 4697 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.295236 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.432434 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.493554 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.529354 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.625306 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.877163 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.897306 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.922899 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 26 00:12:21 crc kubenswrapper[4697]: I0126 00:12:21.954695 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.246496 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerID="ba7403b9b3e81646d596b543e574b6f3855b17ac1b59e92a0e61f0f91ec33e50" exitCode=0
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.246576 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" event={"ID":"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a","Type":"ContainerDied","Data":"ba7403b9b3e81646d596b543e574b6f3855b17ac1b59e92a0e61f0f91ec33e50"}
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.247773 4697 scope.go:117] "RemoveContainer" containerID="ba7403b9b3e81646d596b543e574b6f3855b17ac1b59e92a0e61f0f91ec33e50"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.427596 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.646024 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.646333 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.748136 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.862412 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.916139 4697 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.970441 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 00:12:22 crc kubenswrapper[4697]: I0126 00:12:22.985636 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.028450 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.062285 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.112213 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.149754 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.253201 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" event={"ID":"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a","Type":"ContainerStarted","Data":"1ce084e008b9cadd4c788d20c33c7a00239b5cfabade521c746edcaee4be6fef"}
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.253597 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.256768 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.331667 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.372937 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.440685 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.465739 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.466353 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.720113 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.726600 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.893526 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.934618 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.948946 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 26 00:12:23 crc kubenswrapper[4697]: I0126 00:12:23.953016 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.006711 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.007744 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.179983 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.211196 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.258586 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.508783 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.542797 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.679447 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.770949 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.808991 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.835147 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.927509 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 26 00:12:24 crc kubenswrapper[4697]: I0126 00:12:24.956631 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 26 00:12:25 crc kubenswrapper[4697]: I0126 00:12:25.262126 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 26 00:12:26 crc kubenswrapper[4697]: I0126 00:12:26.603913 4697 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 00:12:26 crc kubenswrapper[4697]: I0126 00:12:26.604319 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf" gracePeriod=5
Jan 26 00:12:26 crc kubenswrapper[4697]: I0126 00:12:26.652565 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 00:12:26 crc kubenswrapper[4697]: I0126 00:12:26.759568 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 26 00:12:26 crc kubenswrapper[4697]: I0126 00:12:26.795941 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 26 00:12:26 crc kubenswrapper[4697]: I0126 00:12:26.930045 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 26 00:12:26 crc kubenswrapper[4697]: I0126 00:12:26.997501 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 26 00:12:27 crc kubenswrapper[4697]: I0126 00:12:27.276199 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 26 00:12:27 crc kubenswrapper[4697]: I0126 00:12:27.547365 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 26 00:12:28 crc kubenswrapper[4697]: I0126 00:12:28.080229 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 00:12:28 crc kubenswrapper[4697]: I0126 00:12:28.167094 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 26 00:12:28 crc kubenswrapper[4697]: I0126 00:12:28.712946 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 26 00:12:28 crc kubenswrapper[4697]: I0126 00:12:28.943416 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.046614 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.201361 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.324519 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.387565 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.416580 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.433868 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.561403 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.567213 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 26 00:12:29 crc kubenswrapper[4697]: I0126 00:12:29.712592 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 26 00:12:31 crc kubenswrapper[4697]: I0126 00:12:31.413599 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 26 00:12:31 crc kubenswrapper[4697]: I0126 00:12:31.886833 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 26 00:12:32 crc kubenswrapper[4697]: I0126 00:12:32.233667 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 26 00:12:32 crc kubenswrapper[4697]: I0126 00:12:32.692751 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 26 00:12:32 crc kubenswrapper[4697]: I0126 00:12:32.931939 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 26 00:12:32 crc kubenswrapper[4697]: I0126 00:12:32.932040 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.039742 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.039800 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.039824 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.039935 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.039922 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.040522 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.040689 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.040847 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.040860 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.041167 4697 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.041183 4697 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.041192 4697 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.041202 4697 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.048374 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.050377 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.097391 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.143869 4697 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.294307 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.308942 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.309000 4697 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf" exitCode=137
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.309047 4697 scope.go:117] "RemoveContainer" containerID="ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.309142 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.329869 4697 scope.go:117] "RemoveContainer" containerID="ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf"
Jan 26 00:12:33 crc kubenswrapper[4697]: E0126 00:12:33.330354 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf\": container with ID starting with ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf not found: ID does not exist" containerID="ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf"
Jan 26 00:12:33 crc kubenswrapper[4697]: I0126 00:12:33.330393 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf"} err="failed to get container status \"ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf\": rpc error: code = NotFound desc = could not find container \"ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf\": container with ID starting with ca44f51c30864648bf0fc56ff13558d04001ae67bb4a4feec6e34e94352fa4bf not found: ID does not exist"
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.124812 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.439789 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.675792 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.676027 4697 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.686191 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.686257 4697 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f4d143c7-0c18-44f6-a552-25ecd6af4db2"
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.689480 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 26 00:12:34 crc kubenswrapper[4697]: I0126 00:12:34.689517 4697 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f4d143c7-0c18-44f6-a552-25ecd6af4db2"
Jan 26 00:12:38 crc kubenswrapper[4697]: I0126 00:12:38.520270 4697 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.065791 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccrgz"]
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.066462 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" podUID="7a5c64d6-6db8-486b-9d26-0b46adccec09" containerName="controller-manager" containerID="cri-o://e9e524060ad1861dabb4e85495424797ba26d57822e4e580aed50d0ba2b3b989" gracePeriod=30
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.330501 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"]
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.330714 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" podUID="0ed3e7cf-192c-47e4-8e75-9d89cda7c136" containerName="route-controller-manager" containerID="cri-o://8cd09b04cf369fb266fb8e718df75806b12cfb838b04c4fa404259e2882110ce" gracePeriod=30
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.375279 4697 generic.go:334] "Generic (PLEG): container finished" podID="7a5c64d6-6db8-486b-9d26-0b46adccec09" containerID="e9e524060ad1861dabb4e85495424797ba26d57822e4e580aed50d0ba2b3b989" exitCode=0
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.375329 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" event={"ID":"7a5c64d6-6db8-486b-9d26-0b46adccec09","Type":"ContainerDied","Data":"e9e524060ad1861dabb4e85495424797ba26d57822e4e580aed50d0ba2b3b989"}
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.608413 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.689164 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-proxy-ca-bundles\") pod \"7a5c64d6-6db8-486b-9d26-0b46adccec09\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") "
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.689234 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v6wr\" (UniqueName: \"kubernetes.io/projected/7a5c64d6-6db8-486b-9d26-0b46adccec09-kube-api-access-4v6wr\") pod \"7a5c64d6-6db8-486b-9d26-0b46adccec09\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") "
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.690275 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a5c64d6-6db8-486b-9d26-0b46adccec09" (UID: "7a5c64d6-6db8-486b-9d26-0b46adccec09"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.690624 4697 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.695546 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5c64d6-6db8-486b-9d26-0b46adccec09-kube-api-access-4v6wr" (OuterVolumeSpecName: "kube-api-access-4v6wr") pod "7a5c64d6-6db8-486b-9d26-0b46adccec09" (UID: "7a5c64d6-6db8-486b-9d26-0b46adccec09"). InnerVolumeSpecName "kube-api-access-4v6wr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.791259 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-client-ca\") pod \"7a5c64d6-6db8-486b-9d26-0b46adccec09\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") "
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.791317 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-config\") pod \"7a5c64d6-6db8-486b-9d26-0b46adccec09\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") "
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.791356 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5c64d6-6db8-486b-9d26-0b46adccec09-serving-cert\") pod \"7a5c64d6-6db8-486b-9d26-0b46adccec09\" (UID: \"7a5c64d6-6db8-486b-9d26-0b46adccec09\") "
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.791512 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v6wr\" (UniqueName: \"kubernetes.io/projected/7a5c64d6-6db8-486b-9d26-0b46adccec09-kube-api-access-4v6wr\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.792277 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a5c64d6-6db8-486b-9d26-0b46adccec09" (UID: "7a5c64d6-6db8-486b-9d26-0b46adccec09"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.792478 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-config" (OuterVolumeSpecName: "config") pod "7a5c64d6-6db8-486b-9d26-0b46adccec09" (UID: "7a5c64d6-6db8-486b-9d26-0b46adccec09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.794194 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5c64d6-6db8-486b-9d26-0b46adccec09-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a5c64d6-6db8-486b-9d26-0b46adccec09" (UID: "7a5c64d6-6db8-486b-9d26-0b46adccec09"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.892364 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-client-ca\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.892395 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5c64d6-6db8-486b-9d26-0b46adccec09-config\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:43 crc kubenswrapper[4697]: I0126 00:12:43.892404 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5c64d6-6db8-486b-9d26-0b46adccec09-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.384282 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz" event={"ID":"7a5c64d6-6db8-486b-9d26-0b46adccec09","Type":"ContainerDied","Data":"32442d84c73e37df4987257c413ad9db75b4ce87bfc5584b379c8ba2f80ea975"}
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.384348 4697 scope.go:117] "RemoveContainer" containerID="e9e524060ad1861dabb4e85495424797ba26d57822e4e580aed50d0ba2b3b989"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.384357 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-ccrgz"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.387418 4697 generic.go:334] "Generic (PLEG): container finished" podID="0ed3e7cf-192c-47e4-8e75-9d89cda7c136" containerID="8cd09b04cf369fb266fb8e718df75806b12cfb838b04c4fa404259e2882110ce" exitCode=0
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.387483 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" event={"ID":"0ed3e7cf-192c-47e4-8e75-9d89cda7c136","Type":"ContainerDied","Data":"8cd09b04cf369fb266fb8e718df75806b12cfb838b04c4fa404259e2882110ce"}
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.450880 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.468045 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccrgz"]
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.477508 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-ccrgz"]
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.483892 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9b4cf4898-4k5rc"]
Jan 26 00:12:44 crc kubenswrapper[4697]: E0126 00:12:44.484278 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" containerName="installer"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484294 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" containerName="installer"
Jan 26 00:12:44 crc kubenswrapper[4697]: E0126 00:12:44.484309 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5c64d6-6db8-486b-9d26-0b46adccec09" containerName="controller-manager"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484319 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5c64d6-6db8-486b-9d26-0b46adccec09" containerName="controller-manager"
Jan 26 00:12:44 crc kubenswrapper[4697]: E0126 00:12:44.484330 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484337 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 00:12:44 crc kubenswrapper[4697]: E0126 00:12:44.484345 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ed3e7cf-192c-47e4-8e75-9d89cda7c136" containerName="route-controller-manager"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484355 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed3e7cf-192c-47e4-8e75-9d89cda7c136" containerName="route-controller-manager"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484482 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb0379c-8f6d-4151-8552-d891cf28c05b" containerName="installer"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484496 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484507 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5c64d6-6db8-486b-9d26-0b46adccec09" containerName="controller-manager"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.484516 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ed3e7cf-192c-47e4-8e75-9d89cda7c136" containerName="route-controller-manager"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.485041 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.487432 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.487470 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.488390 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.488512 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.488979 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.489156 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.490792 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b4cf4898-4k5rc"]
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.493911 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498203 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-config\") pod \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") "
Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498294 4697
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-client-ca\") pod \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498484 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsgz\" (UniqueName: \"kubernetes.io/projected/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-kube-api-access-5lsgz\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498530 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-client-ca\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498556 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-serving-cert\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498592 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-config\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " 
pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498640 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-proxy-ca-bundles\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498951 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ed3e7cf-192c-47e4-8e75-9d89cda7c136" (UID: "0ed3e7cf-192c-47e4-8e75-9d89cda7c136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.498963 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-config" (OuterVolumeSpecName: "config") pod "0ed3e7cf-192c-47e4-8e75-9d89cda7c136" (UID: "0ed3e7cf-192c-47e4-8e75-9d89cda7c136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.599206 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqdg4\" (UniqueName: \"kubernetes.io/projected/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-kube-api-access-qqdg4\") pod \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.599328 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-serving-cert\") pod \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\" (UID: \"0ed3e7cf-192c-47e4-8e75-9d89cda7c136\") " Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.599538 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsgz\" (UniqueName: \"kubernetes.io/projected/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-kube-api-access-5lsgz\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.599772 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-client-ca\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.599826 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-serving-cert\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " 
pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.599856 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-config\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.599909 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-proxy-ca-bundles\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.600002 4697 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.600027 4697 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.600742 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-client-ca\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.601232 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-config\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.601340 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-proxy-ca-bundles\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.602866 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-kube-api-access-qqdg4" (OuterVolumeSpecName: "kube-api-access-qqdg4") pod "0ed3e7cf-192c-47e4-8e75-9d89cda7c136" (UID: "0ed3e7cf-192c-47e4-8e75-9d89cda7c136"). InnerVolumeSpecName "kube-api-access-qqdg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.603708 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-serving-cert\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.603896 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ed3e7cf-192c-47e4-8e75-9d89cda7c136" (UID: "0ed3e7cf-192c-47e4-8e75-9d89cda7c136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.622101 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsgz\" (UniqueName: \"kubernetes.io/projected/f6e2868c-b6eb-4fbd-8502-33a46335e4bc-kube-api-access-5lsgz\") pod \"controller-manager-9b4cf4898-4k5rc\" (UID: \"f6e2868c-b6eb-4fbd-8502-33a46335e4bc\") " pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.667876 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5c64d6-6db8-486b-9d26-0b46adccec09" path="/var/lib/kubelet/pods/7a5c64d6-6db8-486b-9d26-0b46adccec09/volumes" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.701294 4697 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.701361 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqdg4\" (UniqueName: \"kubernetes.io/projected/0ed3e7cf-192c-47e4-8e75-9d89cda7c136-kube-api-access-qqdg4\") on node \"crc\" DevicePath \"\"" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.809979 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.893990 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk"] Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.895663 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.904875 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65ce24c-8baf-49f4-be20-e505526f83d8-serving-cert\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.904938 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f65ce24c-8baf-49f4-be20-e505526f83d8-client-ca\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.905041 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65ce24c-8baf-49f4-be20-e505526f83d8-config\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.905093 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9tzz\" (UniqueName: \"kubernetes.io/projected/f65ce24c-8baf-49f4-be20-e505526f83d8-kube-api-access-q9tzz\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:44 crc kubenswrapper[4697]: I0126 00:12:44.921713 4697 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk"] Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.005224 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b4cf4898-4k5rc"] Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.006974 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f65ce24c-8baf-49f4-be20-e505526f83d8-client-ca\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.007063 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65ce24c-8baf-49f4-be20-e505526f83d8-config\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.007136 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9tzz\" (UniqueName: \"kubernetes.io/projected/f65ce24c-8baf-49f4-be20-e505526f83d8-kube-api-access-q9tzz\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.007162 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65ce24c-8baf-49f4-be20-e505526f83d8-serving-cert\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " 
pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.009635 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65ce24c-8baf-49f4-be20-e505526f83d8-config\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.010271 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f65ce24c-8baf-49f4-be20-e505526f83d8-client-ca\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.011330 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65ce24c-8baf-49f4-be20-e505526f83d8-serving-cert\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: W0126 00:12:45.011378 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6e2868c_b6eb_4fbd_8502_33a46335e4bc.slice/crio-de804e5d5438499175eaa134ccf5018d3a60a4048f74fc641ad5e85a48131b5f WatchSource:0}: Error finding container de804e5d5438499175eaa134ccf5018d3a60a4048f74fc641ad5e85a48131b5f: Status 404 returned error can't find the container with id de804e5d5438499175eaa134ccf5018d3a60a4048f74fc641ad5e85a48131b5f Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.031902 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-q9tzz\" (UniqueName: \"kubernetes.io/projected/f65ce24c-8baf-49f4-be20-e505526f83d8-kube-api-access-q9tzz\") pod \"route-controller-manager-85b5cfd4c6-9bmsk\" (UID: \"f65ce24c-8baf-49f4-be20-e505526f83d8\") " pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.209456 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.397186 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" event={"ID":"f6e2868c-b6eb-4fbd-8502-33a46335e4bc","Type":"ContainerStarted","Data":"5e8d3396b5581d40a83bbe027feb0923cc73728b62fa540acbaf82b5facf57a7"} Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.397235 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" event={"ID":"f6e2868c-b6eb-4fbd-8502-33a46335e4bc","Type":"ContainerStarted","Data":"de804e5d5438499175eaa134ccf5018d3a60a4048f74fc641ad5e85a48131b5f"} Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.397706 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.403507 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.403549 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" event={"ID":"0ed3e7cf-192c-47e4-8e75-9d89cda7c136","Type":"ContainerDied","Data":"ecd465b0c0a8493f35248c44f3624e276a918509b49f4af97b11529ca4e5866c"} Jan 26 
00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.403582 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.403586 4697 scope.go:117] "RemoveContainer" containerID="8cd09b04cf369fb266fb8e718df75806b12cfb838b04c4fa404259e2882110ce" Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.435683 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9b4cf4898-4k5rc" podStartSLOduration=1.43566228 podStartE2EDuration="1.43566228s" podCreationTimestamp="2026-01-26 00:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:45.423446375 +0000 UTC m=+307.060223785" watchObservedRunningTime="2026-01-26 00:12:45.43566228 +0000 UTC m=+307.072439670" Jan 26 00:12:45 crc kubenswrapper[4697]: W0126 00:12:45.454474 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65ce24c_8baf_49f4_be20_e505526f83d8.slice/crio-cc75f4bfbae5a591831a66d863d6abec9aec3e439d4d008cdc8ba75565a0abdc WatchSource:0}: Error finding container cc75f4bfbae5a591831a66d863d6abec9aec3e439d4d008cdc8ba75565a0abdc: Status 404 returned error can't find the container with id cc75f4bfbae5a591831a66d863d6abec9aec3e439d4d008cdc8ba75565a0abdc Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.454731 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"] Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.468189 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-b798x"] Jan 26 00:12:45 crc kubenswrapper[4697]: I0126 00:12:45.483596 
4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk"] Jan 26 00:12:46 crc kubenswrapper[4697]: I0126 00:12:46.414150 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" event={"ID":"f65ce24c-8baf-49f4-be20-e505526f83d8","Type":"ContainerStarted","Data":"e460da650b17236ab18f008ef3dc1193b38c2ee8f52f958ad48fe907769ac56e"} Jan 26 00:12:46 crc kubenswrapper[4697]: I0126 00:12:46.414669 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" event={"ID":"f65ce24c-8baf-49f4-be20-e505526f83d8","Type":"ContainerStarted","Data":"cc75f4bfbae5a591831a66d863d6abec9aec3e439d4d008cdc8ba75565a0abdc"} Jan 26 00:12:46 crc kubenswrapper[4697]: I0126 00:12:46.419515 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:46 crc kubenswrapper[4697]: I0126 00:12:46.423418 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" Jan 26 00:12:46 crc kubenswrapper[4697]: I0126 00:12:46.449099 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85b5cfd4c6-9bmsk" podStartSLOduration=3.449081261 podStartE2EDuration="3.449081261s" podCreationTimestamp="2026-01-26 00:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:12:46.44905878 +0000 UTC m=+308.085836190" watchObservedRunningTime="2026-01-26 00:12:46.449081261 +0000 UTC m=+308.085858651" Jan 26 00:12:46 crc kubenswrapper[4697]: I0126 00:12:46.667663 4697 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="0ed3e7cf-192c-47e4-8e75-9d89cda7c136" path="/var/lib/kubelet/pods/0ed3e7cf-192c-47e4-8e75-9d89cda7c136/volumes" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.769545 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dsl8p"] Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.771057 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.795534 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dsl8p"] Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.912992 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-kube-api-access-5wdq5\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.913094 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-bound-sa-token\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.913128 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-registry-tls\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" 
Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.913153 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3d1531d-c792-4651-bb37-b797edfaf5e4-registry-certificates\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.913180 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3d1531d-c792-4651-bb37-b797edfaf5e4-trusted-ca\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.913208 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3d1531d-c792-4651-bb37-b797edfaf5e4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.913584 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3d1531d-c792-4651-bb37-b797edfaf5e4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.913673 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:11 crc kubenswrapper[4697]: I0126 00:13:11.944575 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.014762 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3d1531d-c792-4651-bb37-b797edfaf5e4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.014856 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3d1531d-c792-4651-bb37-b797edfaf5e4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.014916 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-kube-api-access-5wdq5\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc 
kubenswrapper[4697]: I0126 00:13:12.014940 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-bound-sa-token\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.014963 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-registry-tls\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.014983 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3d1531d-c792-4651-bb37-b797edfaf5e4-registry-certificates\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.014998 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3d1531d-c792-4651-bb37-b797edfaf5e4-trusted-ca\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.015806 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c3d1531d-c792-4651-bb37-b797edfaf5e4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.016834 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c3d1531d-c792-4651-bb37-b797edfaf5e4-trusted-ca\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.016997 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c3d1531d-c792-4651-bb37-b797edfaf5e4-registry-certificates\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.026136 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-registry-tls\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.026499 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c3d1531d-c792-4651-bb37-b797edfaf5e4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.036096 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-bound-sa-token\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: 
\"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.037895 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wdq5\" (UniqueName: \"kubernetes.io/projected/c3d1531d-c792-4651-bb37-b797edfaf5e4-kube-api-access-5wdq5\") pod \"image-registry-66df7c8f76-dsl8p\" (UID: \"c3d1531d-c792-4651-bb37-b797edfaf5e4\") " pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.091577 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:12 crc kubenswrapper[4697]: I0126 00:13:12.670851 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-dsl8p"] Jan 26 00:13:12 crc kubenswrapper[4697]: W0126 00:13:12.675932 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d1531d_c792_4651_bb37_b797edfaf5e4.slice/crio-5f05b6d3a513021ece450ced825b083c124eda48b4802c77e743cf45260c9131 WatchSource:0}: Error finding container 5f05b6d3a513021ece450ced825b083c124eda48b4802c77e743cf45260c9131: Status 404 returned error can't find the container with id 5f05b6d3a513021ece450ced825b083c124eda48b4802c77e743cf45260c9131 Jan 26 00:13:13 crc kubenswrapper[4697]: I0126 00:13:13.577270 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" event={"ID":"c3d1531d-c792-4651-bb37-b797edfaf5e4","Type":"ContainerStarted","Data":"5f05b6d3a513021ece450ced825b083c124eda48b4802c77e743cf45260c9131"} Jan 26 00:13:14 crc kubenswrapper[4697]: I0126 00:13:14.583410 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" 
event={"ID":"c3d1531d-c792-4651-bb37-b797edfaf5e4","Type":"ContainerStarted","Data":"881b20072909806eb94d4cd454993764d20de66fb1f8a49acb4625946128791e"} Jan 26 00:13:14 crc kubenswrapper[4697]: I0126 00:13:14.583709 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:14 crc kubenswrapper[4697]: I0126 00:13:14.602607 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" podStartSLOduration=3.60259141 podStartE2EDuration="3.60259141s" podCreationTimestamp="2026-01-26 00:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:14.599201573 +0000 UTC m=+336.235978963" watchObservedRunningTime="2026-01-26 00:13:14.60259141 +0000 UTC m=+336.239368800" Jan 26 00:13:32 crc kubenswrapper[4697]: I0126 00:13:32.099307 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-dsl8p" Jan 26 00:13:32 crc kubenswrapper[4697]: I0126 00:13:32.159393 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xls7q"] Jan 26 00:13:36 crc kubenswrapper[4697]: I0126 00:13:36.329044 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:13:36 crc kubenswrapper[4697]: I0126 00:13:36.329574 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:13:40 crc kubenswrapper[4697]: I0126 00:13:40.236334 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wklhb"] Jan 26 00:13:40 crc kubenswrapper[4697]: I0126 00:13:40.237302 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wklhb" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="registry-server" containerID="cri-o://0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93" gracePeriod=2 Jan 26 00:13:40 crc kubenswrapper[4697]: I0126 00:13:40.433025 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9f66"] Jan 26 00:13:40 crc kubenswrapper[4697]: I0126 00:13:40.433450 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p9f66" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="registry-server" containerID="cri-o://544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d" gracePeriod=2 Jan 26 00:13:40 crc kubenswrapper[4697]: E0126 00:13:40.937035 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93 is running failed: container process not found" containerID="0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:40 crc kubenswrapper[4697]: E0126 00:13:40.938060 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93 is running failed: container process not found" 
containerID="0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:40 crc kubenswrapper[4697]: E0126 00:13:40.938904 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93 is running failed: container process not found" containerID="0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:40 crc kubenswrapper[4697]: E0126 00:13:40.939156 4697 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-wklhb" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="registry-server" Jan 26 00:13:41 crc kubenswrapper[4697]: E0126 00:13:41.201929 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d is running failed: container process not found" containerID="544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:41 crc kubenswrapper[4697]: E0126 00:13:41.202315 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d is running failed: container process not found" containerID="544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:41 crc kubenswrapper[4697]: E0126 00:13:41.202586 4697 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d is running failed: container process not found" containerID="544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:41 crc kubenswrapper[4697]: E0126 00:13:41.202616 4697 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-p9f66" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="registry-server" Jan 26 00:13:42 crc kubenswrapper[4697]: I0126 00:13:42.628395 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7lfb"] Jan 26 00:13:42 crc kubenswrapper[4697]: I0126 00:13:42.628690 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s7lfb" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="registry-server" containerID="cri-o://89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b" gracePeriod=2 Jan 26 00:13:42 crc kubenswrapper[4697]: I0126 00:13:42.756519 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f66" event={"ID":"9cfec86d-c03e-4a9d-8571-a233cba73af1","Type":"ContainerDied","Data":"544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d"} Jan 26 00:13:42 crc kubenswrapper[4697]: I0126 00:13:42.756581 4697 generic.go:334] "Generic (PLEG): container finished" podID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerID="544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d" exitCode=0 Jan 26 00:13:42 crc 
kubenswrapper[4697]: I0126 00:13:42.760661 4697 generic.go:334] "Generic (PLEG): container finished" podID="d7db0326-548c-4c19-86c3-15af398d39cb" containerID="0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93" exitCode=0 Jan 26 00:13:42 crc kubenswrapper[4697]: I0126 00:13:42.760691 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wklhb" event={"ID":"d7db0326-548c-4c19-86c3-15af398d39cb","Type":"ContainerDied","Data":"0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93"} Jan 26 00:13:42 crc kubenswrapper[4697]: I0126 00:13:42.827461 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82kcj"] Jan 26 00:13:42 crc kubenswrapper[4697]: I0126 00:13:42.827696 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82kcj" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="registry-server" containerID="cri-o://208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4" gracePeriod=2 Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:42.994664 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.052775 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:13:43 crc kubenswrapper[4697]: E0126 00:13:43.127359 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b is running failed: container process not found" containerID="89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:43 crc kubenswrapper[4697]: E0126 00:13:43.127872 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b is running failed: container process not found" containerID="89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:43 crc kubenswrapper[4697]: E0126 00:13:43.128166 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b is running failed: container process not found" containerID="89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:43 crc kubenswrapper[4697]: E0126 00:13:43.128254 4697 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-s7lfb" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="registry-server" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.148843 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-catalog-content\") pod \"d7db0326-548c-4c19-86c3-15af398d39cb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.148929 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-utilities\") pod \"d7db0326-548c-4c19-86c3-15af398d39cb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.149042 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcmxw\" (UniqueName: \"kubernetes.io/projected/d7db0326-548c-4c19-86c3-15af398d39cb-kube-api-access-rcmxw\") pod \"d7db0326-548c-4c19-86c3-15af398d39cb\" (UID: \"d7db0326-548c-4c19-86c3-15af398d39cb\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.149928 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-utilities" (OuterVolumeSpecName: "utilities") pod "d7db0326-548c-4c19-86c3-15af398d39cb" (UID: "d7db0326-548c-4c19-86c3-15af398d39cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.155394 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7db0326-548c-4c19-86c3-15af398d39cb-kube-api-access-rcmxw" (OuterVolumeSpecName: "kube-api-access-rcmxw") pod "d7db0326-548c-4c19-86c3-15af398d39cb" (UID: "d7db0326-548c-4c19-86c3-15af398d39cb"). InnerVolumeSpecName "kube-api-access-rcmxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.198469 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7db0326-548c-4c19-86c3-15af398d39cb" (UID: "d7db0326-548c-4c19-86c3-15af398d39cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.250491 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-catalog-content\") pod \"9cfec86d-c03e-4a9d-8571-a233cba73af1\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.250597 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-utilities\") pod \"9cfec86d-c03e-4a9d-8571-a233cba73af1\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.250637 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgrgk\" (UniqueName: \"kubernetes.io/projected/9cfec86d-c03e-4a9d-8571-a233cba73af1-kube-api-access-pgrgk\") pod \"9cfec86d-c03e-4a9d-8571-a233cba73af1\" (UID: \"9cfec86d-c03e-4a9d-8571-a233cba73af1\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.250855 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcmxw\" (UniqueName: \"kubernetes.io/projected/d7db0326-548c-4c19-86c3-15af398d39cb-kube-api-access-rcmxw\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.250890 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.250904 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7db0326-548c-4c19-86c3-15af398d39cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.251404 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-utilities" (OuterVolumeSpecName: "utilities") pod "9cfec86d-c03e-4a9d-8571-a233cba73af1" (UID: "9cfec86d-c03e-4a9d-8571-a233cba73af1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.253591 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cfec86d-c03e-4a9d-8571-a233cba73af1-kube-api-access-pgrgk" (OuterVolumeSpecName: "kube-api-access-pgrgk") pod "9cfec86d-c03e-4a9d-8571-a233cba73af1" (UID: "9cfec86d-c03e-4a9d-8571-a233cba73af1"). InnerVolumeSpecName "kube-api-access-pgrgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.293959 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cfec86d-c03e-4a9d-8571-a233cba73af1" (UID: "9cfec86d-c03e-4a9d-8571-a233cba73af1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.352543 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.352597 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfec86d-c03e-4a9d-8571-a233cba73af1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.352610 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgrgk\" (UniqueName: \"kubernetes.io/projected/9cfec86d-c03e-4a9d-8571-a233cba73af1-kube-api-access-pgrgk\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.766467 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9f66" event={"ID":"9cfec86d-c03e-4a9d-8571-a233cba73af1","Type":"ContainerDied","Data":"16e41de3d60b1a550d0fd6ff6d7d255ebe0ac422fea01e2f4e20501a701654e9"} Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.766516 4697 scope.go:117] "RemoveContainer" containerID="544700b9ed1ff882731f487d3cd17b8d7109f21ef21fe1bb580d4573f2971d3d" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.766625 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p9f66" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.771433 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wklhb" event={"ID":"d7db0326-548c-4c19-86c3-15af398d39cb","Type":"ContainerDied","Data":"61b85b31675fbf15d96b82a5c489f56c3e2478ea4ac8cf7a9ccf7f070d6e40a6"} Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.771527 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wklhb" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.776030 4697 generic.go:334] "Generic (PLEG): container finished" podID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerID="208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4" exitCode=0 Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.776118 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82kcj" event={"ID":"dcfffbc4-4576-4314-b1a2-b990bd8dfa28","Type":"ContainerDied","Data":"208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4"} Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.778396 4697 generic.go:334] "Generic (PLEG): container finished" podID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerID="89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b" exitCode=0 Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.778419 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7lfb" event={"ID":"c086f88d-6f74-44d5-9728-f59ebcec3dce","Type":"ContainerDied","Data":"89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b"} Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.785982 4697 scope.go:117] "RemoveContainer" containerID="7d7cb96a8e5917cc78bc0d76ddca19cd78c69b627c7a98cfd93f5e39f590fce1" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.799420 4697 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wklhb"] Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.802817 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wklhb"] Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.810361 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9f66"] Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.814875 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p9f66"] Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.823599 4697 scope.go:117] "RemoveContainer" containerID="1890c8635442c2ced8baec6223762103d6077b57226f1a14d9511aadea78057e" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.852271 4697 scope.go:117] "RemoveContainer" containerID="0a57f786105ee37eae6a90e96f823e70f6c76a95e7e6205b77b4727a63fc1c93" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.865009 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.869107 4697 scope.go:117] "RemoveContainer" containerID="0b4060f259824a1321bca117f98ee0203ba3ce2d81318c3d1b8251e82b7f85b9" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.959265 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-utilities\") pod \"c086f88d-6f74-44d5-9728-f59ebcec3dce\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.959387 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxg5m\" (UniqueName: \"kubernetes.io/projected/c086f88d-6f74-44d5-9728-f59ebcec3dce-kube-api-access-rxg5m\") pod \"c086f88d-6f74-44d5-9728-f59ebcec3dce\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.959439 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-catalog-content\") pod \"c086f88d-6f74-44d5-9728-f59ebcec3dce\" (UID: \"c086f88d-6f74-44d5-9728-f59ebcec3dce\") " Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.960059 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-utilities" (OuterVolumeSpecName: "utilities") pod "c086f88d-6f74-44d5-9728-f59ebcec3dce" (UID: "c086f88d-6f74-44d5-9728-f59ebcec3dce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.965158 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c086f88d-6f74-44d5-9728-f59ebcec3dce-kube-api-access-rxg5m" (OuterVolumeSpecName: "kube-api-access-rxg5m") pod "c086f88d-6f74-44d5-9728-f59ebcec3dce" (UID: "c086f88d-6f74-44d5-9728-f59ebcec3dce"). InnerVolumeSpecName "kube-api-access-rxg5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:43 crc kubenswrapper[4697]: I0126 00:13:43.991533 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c086f88d-6f74-44d5-9728-f59ebcec3dce" (UID: "c086f88d-6f74-44d5-9728-f59ebcec3dce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.024224 4697 scope.go:117] "RemoveContainer" containerID="3c176882c25a0bbff9627639af94549d03e7d05bf0efc766947a5b0354193b2b" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.060065 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxg5m\" (UniqueName: \"kubernetes.io/projected/c086f88d-6f74-44d5-9728-f59ebcec3dce-kube-api-access-rxg5m\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.060118 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.060128 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c086f88d-6f74-44d5-9728-f59ebcec3dce-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:44 crc kubenswrapper[4697]: E0126 
00:13:44.083969 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4 is running failed: container process not found" containerID="208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:44 crc kubenswrapper[4697]: E0126 00:13:44.084488 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4 is running failed: container process not found" containerID="208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:44 crc kubenswrapper[4697]: E0126 00:13:44.084806 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4 is running failed: container process not found" containerID="208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:44 crc kubenswrapper[4697]: E0126 00:13:44.084837 4697 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-82kcj" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="registry-server" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.668264 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" 
path="/var/lib/kubelet/pods/9cfec86d-c03e-4a9d-8571-a233cba73af1/volumes" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.669188 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" path="/var/lib/kubelet/pods/d7db0326-548c-4c19-86c3-15af398d39cb/volumes" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.787230 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82kcj" event={"ID":"dcfffbc4-4576-4314-b1a2-b990bd8dfa28","Type":"ContainerDied","Data":"38370a5b99825fc943405515409b981c1018e1305990d0c91a2f57786f586d5f"} Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.787276 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38370a5b99825fc943405515409b981c1018e1305990d0c91a2f57786f586d5f" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.789300 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7lfb" event={"ID":"c086f88d-6f74-44d5-9728-f59ebcec3dce","Type":"ContainerDied","Data":"eac5549d0d7d01a7210cd121714d25d8bceb6f9e2637ee9efef41fb371a3e3fc"} Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.789338 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7lfb" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.789588 4697 scope.go:117] "RemoveContainer" containerID="89c4cc88d475c29d9b312a83049dbc530c7c5c3de425cc8361c91952eb4c768b" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.794443 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.806199 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7lfb"] Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.809096 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7lfb"] Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.823343 4697 scope.go:117] "RemoveContainer" containerID="dc21a5d3ed33e83118ad75e242a3f4c7bdd0c2af86922cec5fe63536dab0774a" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.850051 4697 scope.go:117] "RemoveContainer" containerID="a125d1d41e5e93a096e32cd085dcf1cdc575830bcf33edc5a5532b8b3b71a807" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.871752 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-utilities\") pod \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.871838 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-catalog-content\") pod \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.872848 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-utilities" (OuterVolumeSpecName: "utilities") pod "dcfffbc4-4576-4314-b1a2-b990bd8dfa28" (UID: "dcfffbc4-4576-4314-b1a2-b990bd8dfa28"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.972898 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rqr\" (UniqueName: \"kubernetes.io/projected/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-kube-api-access-l8rqr\") pod \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\" (UID: \"dcfffbc4-4576-4314-b1a2-b990bd8dfa28\") " Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.973298 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.977964 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-kube-api-access-l8rqr" (OuterVolumeSpecName: "kube-api-access-l8rqr") pod "dcfffbc4-4576-4314-b1a2-b990bd8dfa28" (UID: "dcfffbc4-4576-4314-b1a2-b990bd8dfa28"). InnerVolumeSpecName "kube-api-access-l8rqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:44 crc kubenswrapper[4697]: I0126 00:13:44.982090 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcfffbc4-4576-4314-b1a2-b990bd8dfa28" (UID: "dcfffbc4-4576-4314-b1a2-b990bd8dfa28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:45 crc kubenswrapper[4697]: I0126 00:13:45.073922 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rqr\" (UniqueName: \"kubernetes.io/projected/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-kube-api-access-l8rqr\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:45 crc kubenswrapper[4697]: I0126 00:13:45.073967 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcfffbc4-4576-4314-b1a2-b990bd8dfa28-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:45 crc kubenswrapper[4697]: I0126 00:13:45.803913 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82kcj" Jan 26 00:13:45 crc kubenswrapper[4697]: I0126 00:13:45.838323 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82kcj"] Jan 26 00:13:45 crc kubenswrapper[4697]: I0126 00:13:45.841287 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82kcj"] Jan 26 00:13:46 crc kubenswrapper[4697]: I0126 00:13:46.671448 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" path="/var/lib/kubelet/pods/c086f88d-6f74-44d5-9728-f59ebcec3dce/volumes" Jan 26 00:13:46 crc kubenswrapper[4697]: I0126 00:13:46.673000 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" path="/var/lib/kubelet/pods/dcfffbc4-4576-4314-b1a2-b990bd8dfa28/volumes" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.950695 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4m7r"] Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.951534 4697 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-w4m7r" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="registry-server" containerID="cri-o://efcb39628d70a560941d9f0100c54785197ff4d9085511ddaa57591f9ea4a972" gracePeriod=30 Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.955124 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxm8l"] Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.956595 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kxm8l" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="registry-server" containerID="cri-o://db411f20868abaa79cb5cb4bb1d0a0b698b2497cf0a8c77e83347b9b85f867d5" gracePeriod=30 Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.969930 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9f8xv"] Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.970183 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" containerID="cri-o://1ce084e008b9cadd4c788d20c33c7a00239b5cfabade521c746edcaee4be6fef" gracePeriod=30 Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.976149 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbn5x"] Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.976624 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xbn5x" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="registry-server" containerID="cri-o://e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811" gracePeriod=30 Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.989936 4697 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-tp2xk"] Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.990177 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tp2xk" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="registry-server" containerID="cri-o://a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa" gracePeriod=30 Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993101 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t6xt4"] Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993277 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993288 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993299 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993305 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993314 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993320 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993339 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" 
containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993344 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993355 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993360 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993370 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993375 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993386 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993392 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993400 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993406 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993415 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" 
containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993420 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993426 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993432 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="extract-content" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993440 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993445 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="extract-utilities" Jan 26 00:13:50 crc kubenswrapper[4697]: E0126 00:13:50.993452 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993458 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993549 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cfec86d-c03e-4a9d-8571-a233cba73af1" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993561 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c086f88d-6f74-44d5-9728-f59ebcec3dce" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993568 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7db0326-548c-4c19-86c3-15af398d39cb" 
containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993579 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcfffbc4-4576-4314-b1a2-b990bd8dfa28" containerName="registry-server" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.993914 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:50 crc kubenswrapper[4697]: I0126 00:13:50.998411 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t6xt4"] Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.167098 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.167150 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.167176 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wrhn\" (UniqueName: \"kubernetes.io/projected/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-kube-api-access-4wrhn\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 
00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.268318 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.268548 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.268574 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wrhn\" (UniqueName: \"kubernetes.io/projected/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-kube-api-access-4wrhn\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.270338 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.275143 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-marketplace-operator-metrics\") 
pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.285988 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wrhn\" (UniqueName: \"kubernetes.io/projected/4b52e277-1275-4d65-8d52-5dbdec0fd0cd-kube-api-access-4wrhn\") pod \"marketplace-operator-79b997595-t6xt4\" (UID: \"4b52e277-1275-4d65-8d52-5dbdec0fd0cd\") " pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.332398 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.750253 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t6xt4"] Jan 26 00:13:51 crc kubenswrapper[4697]: I0126 00:13:51.839682 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" event={"ID":"4b52e277-1275-4d65-8d52-5dbdec0fd0cd","Type":"ContainerStarted","Data":"3825b0fb5b2099c643789529e1c71b7d8b34c9b132f001b06a34e05f060e4515"} Jan 26 00:13:52 crc kubenswrapper[4697]: I0126 00:13:52.647936 4697 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-9f8xv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 26 00:13:52 crc kubenswrapper[4697]: I0126 00:13:52.648007 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 26 00:13:52 crc kubenswrapper[4697]: E0126 00:13:52.960970 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811 is running failed: container process not found" containerID="e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:52 crc kubenswrapper[4697]: E0126 00:13:52.961377 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811 is running failed: container process not found" containerID="e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:52 crc kubenswrapper[4697]: E0126 00:13:52.961676 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811 is running failed: container process not found" containerID="e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:52 crc kubenswrapper[4697]: E0126 00:13:52.961716 4697 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-xbn5x" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="registry-server" Jan 26 00:13:53 crc kubenswrapper[4697]: E0126 00:13:53.241291 4697 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2f51327_b54f_430d_8728_302b40279d68.slice/crio-conmon-efcb39628d70a560941d9f0100c54785197ff4d9085511ddaa57591f9ea4a972.scope\": RecentStats: unable to find data in memory cache]" Jan 26 00:13:53 crc kubenswrapper[4697]: E0126 00:13:53.705882 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa is running failed: container process not found" containerID="a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:53 crc kubenswrapper[4697]: E0126 00:13:53.706448 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa is running failed: container process not found" containerID="a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:53 crc kubenswrapper[4697]: E0126 00:13:53.706707 4697 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa is running failed: container process not found" containerID="a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa" cmd=["grpc_health_probe","-addr=:50051"] Jan 26 00:13:53 crc kubenswrapper[4697]: E0126 00:13:53.706741 4697 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa is running failed: container process 
not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tp2xk" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="registry-server" Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.853619 4697 generic.go:334] "Generic (PLEG): container finished" podID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerID="db411f20868abaa79cb5cb4bb1d0a0b698b2497cf0a8c77e83347b9b85f867d5" exitCode=0 Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.853696 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxm8l" event={"ID":"52a87d4f-2b9b-44c1-9457-cedcf68d8819","Type":"ContainerDied","Data":"db411f20868abaa79cb5cb4bb1d0a0b698b2497cf0a8c77e83347b9b85f867d5"} Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.856666 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2f51327-b54f-430d-8728-302b40279d68" containerID="efcb39628d70a560941d9f0100c54785197ff4d9085511ddaa57591f9ea4a972" exitCode=0 Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.856720 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerDied","Data":"efcb39628d70a560941d9f0100c54785197ff4d9085511ddaa57591f9ea4a972"} Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.859213 4697 generic.go:334] "Generic (PLEG): container finished" podID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerID="a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa" exitCode=0 Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.859295 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xk" event={"ID":"8355e146-dafa-45db-85a5-b1534eeb6b53","Type":"ContainerDied","Data":"a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa"} Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.860600 4697 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" event={"ID":"4b52e277-1275-4d65-8d52-5dbdec0fd0cd","Type":"ContainerStarted","Data":"5cc9d3d694382e336f2a14c01abe96f878d6a6691b6ee3d0701dfd20b0f01136"} Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.860915 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.863194 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.867417 4697 generic.go:334] "Generic (PLEG): container finished" podID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerID="1ce084e008b9cadd4c788d20c33c7a00239b5cfabade521c746edcaee4be6fef" exitCode=0 Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.867489 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" event={"ID":"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a","Type":"ContainerDied","Data":"1ce084e008b9cadd4c788d20c33c7a00239b5cfabade521c746edcaee4be6fef"} Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.867526 4697 scope.go:117] "RemoveContainer" containerID="ba7403b9b3e81646d596b543e574b6f3855b17ac1b59e92a0e61f0f91ec33e50" Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.880757 4697 generic.go:334] "Generic (PLEG): container finished" podID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerID="e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811" exitCode=0 Jan 26 00:13:53 crc kubenswrapper[4697]: I0126 00:13:53.880802 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerDied","Data":"e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811"} Jan 26 00:13:53 crc 
kubenswrapper[4697]: I0126 00:13:53.884460 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-t6xt4" podStartSLOduration=3.884437639 podStartE2EDuration="3.884437639s" podCreationTimestamp="2026-01-26 00:13:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:13:53.879362759 +0000 UTC m=+375.516140149" watchObservedRunningTime="2026-01-26 00:13:53.884437639 +0000 UTC m=+375.521215029" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.203560 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.311859 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvfhv\" (UniqueName: \"kubernetes.io/projected/e2f51327-b54f-430d-8728-302b40279d68-kube-api-access-hvfhv\") pod \"e2f51327-b54f-430d-8728-302b40279d68\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.312242 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-catalog-content\") pod \"e2f51327-b54f-430d-8728-302b40279d68\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.312328 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-utilities\") pod \"e2f51327-b54f-430d-8728-302b40279d68\" (UID: \"e2f51327-b54f-430d-8728-302b40279d68\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.313729 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-utilities" (OuterVolumeSpecName: "utilities") pod "e2f51327-b54f-430d-8728-302b40279d68" (UID: "e2f51327-b54f-430d-8728-302b40279d68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.319333 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f51327-b54f-430d-8728-302b40279d68-kube-api-access-hvfhv" (OuterVolumeSpecName: "kube-api-access-hvfhv") pod "e2f51327-b54f-430d-8728-302b40279d68" (UID: "e2f51327-b54f-430d-8728-302b40279d68"). InnerVolumeSpecName "kube-api-access-hvfhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.325722 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.332326 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.381477 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2f51327-b54f-430d-8728-302b40279d68" (UID: "e2f51327-b54f-430d-8728-302b40279d68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.384641 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.388930 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.413618 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-catalog-content\") pod \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421267 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-utilities\") pod \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421367 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99ntm\" (UniqueName: \"kubernetes.io/projected/8355e146-dafa-45db-85a5-b1534eeb6b53-kube-api-access-99ntm\") pod \"8355e146-dafa-45db-85a5-b1534eeb6b53\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421442 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-utilities\") pod \"8355e146-dafa-45db-85a5-b1534eeb6b53\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421484 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlxpv\" (UniqueName: \"kubernetes.io/projected/52a87d4f-2b9b-44c1-9457-cedcf68d8819-kube-api-access-nlxpv\") pod \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\" (UID: \"52a87d4f-2b9b-44c1-9457-cedcf68d8819\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421524 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-catalog-content\") pod \"8355e146-dafa-45db-85a5-b1534eeb6b53\" (UID: \"8355e146-dafa-45db-85a5-b1534eeb6b53\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421923 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421941 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvfhv\" (UniqueName: \"kubernetes.io/projected/e2f51327-b54f-430d-8728-302b40279d68-kube-api-access-hvfhv\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.421988 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2f51327-b54f-430d-8728-302b40279d68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.422294 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-utilities" (OuterVolumeSpecName: "utilities") pod "52a87d4f-2b9b-44c1-9457-cedcf68d8819" (UID: "52a87d4f-2b9b-44c1-9457-cedcf68d8819"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.426000 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8355e146-dafa-45db-85a5-b1534eeb6b53-kube-api-access-99ntm" (OuterVolumeSpecName: "kube-api-access-99ntm") pod "8355e146-dafa-45db-85a5-b1534eeb6b53" (UID: "8355e146-dafa-45db-85a5-b1534eeb6b53"). InnerVolumeSpecName "kube-api-access-99ntm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.426588 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52a87d4f-2b9b-44c1-9457-cedcf68d8819-kube-api-access-nlxpv" (OuterVolumeSpecName: "kube-api-access-nlxpv") pod "52a87d4f-2b9b-44c1-9457-cedcf68d8819" (UID: "52a87d4f-2b9b-44c1-9457-cedcf68d8819"). InnerVolumeSpecName "kube-api-access-nlxpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.434284 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-utilities" (OuterVolumeSpecName: "utilities") pod "8355e146-dafa-45db-85a5-b1534eeb6b53" (UID: "8355e146-dafa-45db-85a5-b1534eeb6b53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.464863 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52a87d4f-2b9b-44c1-9457-cedcf68d8819" (UID: "52a87d4f-2b9b-44c1-9457-cedcf68d8819"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.522821 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-catalog-content\") pod \"292243f2-7308-454f-8d48-a9b408fb2bd5\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.522915 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-utilities\") pod \"292243f2-7308-454f-8d48-a9b408fb2bd5\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.522963 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-operator-metrics\") pod \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.522990 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db2c9\" (UniqueName: \"kubernetes.io/projected/292243f2-7308-454f-8d48-a9b408fb2bd5-kube-api-access-db2c9\") pod \"292243f2-7308-454f-8d48-a9b408fb2bd5\" (UID: \"292243f2-7308-454f-8d48-a9b408fb2bd5\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.523018 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhwnm\" (UniqueName: \"kubernetes.io/projected/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-kube-api-access-qhwnm\") pod \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.523456 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-trusted-ca\") pod \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\" (UID: \"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a\") " Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.523790 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlxpv\" (UniqueName: \"kubernetes.io/projected/52a87d4f-2b9b-44c1-9457-cedcf68d8819-kube-api-access-nlxpv\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.523809 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.523819 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52a87d4f-2b9b-44c1-9457-cedcf68d8819-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.523828 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99ntm\" (UniqueName: \"kubernetes.io/projected/8355e146-dafa-45db-85a5-b1534eeb6b53-kube-api-access-99ntm\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.523836 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.524013 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" (UID: "e9f76302-9a17-4d43-91c8-ca18fcb6cc6a"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.524506 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-utilities" (OuterVolumeSpecName: "utilities") pod "292243f2-7308-454f-8d48-a9b408fb2bd5" (UID: "292243f2-7308-454f-8d48-a9b408fb2bd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.525653 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/292243f2-7308-454f-8d48-a9b408fb2bd5-kube-api-access-db2c9" (OuterVolumeSpecName: "kube-api-access-db2c9") pod "292243f2-7308-454f-8d48-a9b408fb2bd5" (UID: "292243f2-7308-454f-8d48-a9b408fb2bd5"). InnerVolumeSpecName "kube-api-access-db2c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.526113 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-kube-api-access-qhwnm" (OuterVolumeSpecName: "kube-api-access-qhwnm") pod "e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" (UID: "e9f76302-9a17-4d43-91c8-ca18fcb6cc6a"). InnerVolumeSpecName "kube-api-access-qhwnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.526189 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" (UID: "e9f76302-9a17-4d43-91c8-ca18fcb6cc6a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.548146 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "292243f2-7308-454f-8d48-a9b408fb2bd5" (UID: "292243f2-7308-454f-8d48-a9b408fb2bd5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.555799 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8355e146-dafa-45db-85a5-b1534eeb6b53" (UID: "8355e146-dafa-45db-85a5-b1534eeb6b53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.626951 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.626996 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/292243f2-7308-454f-8d48-a9b408fb2bd5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.627018 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.627035 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db2c9\" (UniqueName: 
\"kubernetes.io/projected/292243f2-7308-454f-8d48-a9b408fb2bd5-kube-api-access-db2c9\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.627096 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhwnm\" (UniqueName: \"kubernetes.io/projected/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-kube-api-access-qhwnm\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.627115 4697 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.627127 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8355e146-dafa-45db-85a5-b1534eeb6b53-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.888555 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp2xk" event={"ID":"8355e146-dafa-45db-85a5-b1534eeb6b53","Type":"ContainerDied","Data":"bf1ee74837119e2e51a5116e53fcd5cf8b6228cf2e213a2fba387119b9b55fa5"} Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.888575 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp2xk" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.888627 4697 scope.go:117] "RemoveContainer" containerID="a982d6b11aacc38b17f892cb91f3564c0c80e1c70067f06acfb35c7b905d64fa" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.890849 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" event={"ID":"e9f76302-9a17-4d43-91c8-ca18fcb6cc6a","Type":"ContainerDied","Data":"64cc6208d47d4193953a07a33c649cc09dbb948feabe2035e5bcf729d9c08c37"} Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.890918 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9f8xv" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.918764 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbn5x" event={"ID":"292243f2-7308-454f-8d48-a9b408fb2bd5","Type":"ContainerDied","Data":"908b11b8ed1289d616a3403ad342113b42e1396d338bc4087afd6b2ebe54842c"} Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.918897 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbn5x" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.927221 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w4m7r" event={"ID":"e2f51327-b54f-430d-8728-302b40279d68","Type":"ContainerDied","Data":"540292d8a0ec28657ee310ee0c7a1df0b1e7b59b04a96c28ebd3824dd1991423"} Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.927338 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w4m7r" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.931099 4697 scope.go:117] "RemoveContainer" containerID="440b183ef7678c930e450e0a239aff77833099119a69716925bda97d7552ab39" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.931267 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9f8xv"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.935027 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9f8xv"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.936515 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxm8l" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.936632 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxm8l" event={"ID":"52a87d4f-2b9b-44c1-9457-cedcf68d8819","Type":"ContainerDied","Data":"c50e9c567aedcf7d9550aec693d557271269682e8a061bafeaacc70504cd10c0"} Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.947788 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xk"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.950800 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tp2xk"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.961341 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbn5x"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.967601 4697 scope.go:117] "RemoveContainer" containerID="c5987dbb66286df85405c5886dd118adb83ef37328ef9d80fd891e6ab6aa67e1" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.968000 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xbn5x"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.978310 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w4m7r"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.988160 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w4m7r"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.988856 4697 scope.go:117] "RemoveContainer" containerID="1ce084e008b9cadd4c788d20c33c7a00239b5cfabade521c746edcaee4be6fef" Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.991921 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxm8l"] Jan 26 00:13:54 crc kubenswrapper[4697]: I0126 00:13:54.994861 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kxm8l"] Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.008299 4697 scope.go:117] "RemoveContainer" containerID="e41f785c476c55ebf45b5cd0a7e18a29d0af71c5e0f787457733da178572e811" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.021011 4697 scope.go:117] "RemoveContainer" containerID="9d5c261753be656f9e0748b4384942a591968edaf26925e897b7e4ae7207f3a4" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.035668 4697 scope.go:117] "RemoveContainer" containerID="ec20413bb31325e3bc59205f089fbf255d865fef6be5b7a4450e02b1238da462" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.050446 4697 scope.go:117] "RemoveContainer" containerID="efcb39628d70a560941d9f0100c54785197ff4d9085511ddaa57591f9ea4a972" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.066171 4697 scope.go:117] "RemoveContainer" containerID="99b70f9f24e6b59c5779af5c5c93b1726e29cca28480244c3a2ab4217949f679" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.081110 4697 scope.go:117] "RemoveContainer" 
containerID="e3e67df37a665f964f9b59f7f3f016ff8206b4702b3820a1625625dc2aa39877" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.096006 4697 scope.go:117] "RemoveContainer" containerID="db411f20868abaa79cb5cb4bb1d0a0b698b2497cf0a8c77e83347b9b85f867d5" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.119179 4697 scope.go:117] "RemoveContainer" containerID="2ae4ebb3471bded4d80b7cd81d8aacf1bb8f20359d9227cea5d27d17e3b28f0f" Jan 26 00:13:55 crc kubenswrapper[4697]: I0126 00:13:55.134050 4697 scope.go:117] "RemoveContainer" containerID="34861e47f8b18b9c34ba6432d8725d5ca460cfe4814df1877e75bea9f24e4bcf" Jan 26 00:13:56 crc kubenswrapper[4697]: I0126 00:13:56.667830 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" path="/var/lib/kubelet/pods/292243f2-7308-454f-8d48-a9b408fb2bd5/volumes" Jan 26 00:13:56 crc kubenswrapper[4697]: I0126 00:13:56.668960 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" path="/var/lib/kubelet/pods/52a87d4f-2b9b-44c1-9457-cedcf68d8819/volumes" Jan 26 00:13:56 crc kubenswrapper[4697]: I0126 00:13:56.669717 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" path="/var/lib/kubelet/pods/8355e146-dafa-45db-85a5-b1534eeb6b53/volumes" Jan 26 00:13:56 crc kubenswrapper[4697]: I0126 00:13:56.670935 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f51327-b54f-430d-8728-302b40279d68" path="/var/lib/kubelet/pods/e2f51327-b54f-430d-8728-302b40279d68/volumes" Jan 26 00:13:56 crc kubenswrapper[4697]: I0126 00:13:56.671710 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" path="/var/lib/kubelet/pods/e9f76302-9a17-4d43-91c8-ca18fcb6cc6a/volumes" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039162 4697 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-6qgbj"] Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039501 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039539 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039560 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039568 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039577 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039586 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039619 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039628 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039637 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039646 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039660 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039667 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039695 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039703 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039715 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039722 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039730 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039739 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039751 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039780 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039789 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039797 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039807 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039815 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039827 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039858 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="extract-content" Jan 26 00:13:57 crc kubenswrapper[4697]: E0126 00:13:57.039869 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.039876 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="extract-utilities" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.040035 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="52a87d4f-2b9b-44c1-9457-cedcf68d8819" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.040051 4697 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.040060 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f76302-9a17-4d43-91c8-ca18fcb6cc6a" containerName="marketplace-operator" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.040097 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="292243f2-7308-454f-8d48-a9b408fb2bd5" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.040108 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="8355e146-dafa-45db-85a5-b1534eeb6b53" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.040121 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f51327-b54f-430d-8728-302b40279d68" containerName="registry-server" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.041471 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.046000 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.053057 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qgbj"] Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.163450 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e94cda-195b-4740-ae0d-fcdc027823b1-utilities\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.163500 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e94cda-195b-4740-ae0d-fcdc027823b1-catalog-content\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.163543 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7mkh\" (UniqueName: \"kubernetes.io/projected/79e94cda-195b-4740-ae0d-fcdc027823b1-kube-api-access-l7mkh\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.200788 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" podUID="4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" containerName="registry" 
containerID="cri-o://f2f14fa6175b289f5148ace418e34e560bdc1710b722e6e19563cd8c85c6cf06" gracePeriod=30 Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.237065 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bthp6"] Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.243962 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.245684 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bthp6"] Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.246361 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.264569 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e94cda-195b-4740-ae0d-fcdc027823b1-utilities\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.264623 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e94cda-195b-4740-ae0d-fcdc027823b1-catalog-content\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.264668 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7mkh\" (UniqueName: \"kubernetes.io/projected/79e94cda-195b-4740-ae0d-fcdc027823b1-kube-api-access-l7mkh\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " 
pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.265392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e94cda-195b-4740-ae0d-fcdc027823b1-utilities\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.265412 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e94cda-195b-4740-ae0d-fcdc027823b1-catalog-content\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.289297 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7mkh\" (UniqueName: \"kubernetes.io/projected/79e94cda-195b-4740-ae0d-fcdc027823b1-kube-api-access-l7mkh\") pod \"community-operators-6qgbj\" (UID: \"79e94cda-195b-4740-ae0d-fcdc027823b1\") " pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.366200 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-catalog-content\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.366298 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-utilities\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " 
pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.366357 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkbn\" (UniqueName: \"kubernetes.io/projected/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-kube-api-access-rwkbn\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.366486 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qgbj" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.468868 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-utilities\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.469286 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwkbn\" (UniqueName: \"kubernetes.io/projected/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-kube-api-access-rwkbn\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.469326 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-catalog-content\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.470253 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-catalog-content\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.470519 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-utilities\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.492856 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwkbn\" (UniqueName: \"kubernetes.io/projected/f069b1fd-5a99-4f17-bf0b-aa757f46a13a-kube-api-access-rwkbn\") pod \"certified-operators-bthp6\" (UID: \"f069b1fd-5a99-4f17-bf0b-aa757f46a13a\") " pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.564960 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bthp6" Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.815715 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qgbj"] Jan 26 00:13:57 crc kubenswrapper[4697]: W0126 00:13:57.822813 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79e94cda_195b_4740_ae0d_fcdc027823b1.slice/crio-4faed7de8923931020f3a1df01347e313e0e2397508a10f818ac357726e27303 WatchSource:0}: Error finding container 4faed7de8923931020f3a1df01347e313e0e2397508a10f818ac357726e27303: Status 404 returned error can't find the container with id 4faed7de8923931020f3a1df01347e313e0e2397508a10f818ac357726e27303 Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.952144 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bthp6"] Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.972142 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bthp6" event={"ID":"f069b1fd-5a99-4f17-bf0b-aa757f46a13a","Type":"ContainerStarted","Data":"af9d8594d124db8949a9e61b3f19c76d45efec3be4f03c2218453d27479644ea"} Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.973699 4697 generic.go:334] "Generic (PLEG): container finished" podID="4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" containerID="f2f14fa6175b289f5148ace418e34e560bdc1710b722e6e19563cd8c85c6cf06" exitCode=0 Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.973808 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" event={"ID":"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64","Type":"ContainerDied","Data":"f2f14fa6175b289f5148ace418e34e560bdc1710b722e6e19563cd8c85c6cf06"} Jan 26 00:13:57 crc kubenswrapper[4697]: I0126 00:13:57.975828 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6qgbj" event={"ID":"79e94cda-195b-4740-ae0d-fcdc027823b1","Type":"ContainerStarted","Data":"4faed7de8923931020f3a1df01347e313e0e2397508a10f818ac357726e27303"} Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.042703 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177428 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-tls\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177650 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177709 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-trusted-ca\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177755 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-ca-trust-extracted\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177807 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-bound-sa-token\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177850 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-installation-pull-secrets\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177889 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-certificates\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.177950 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqpf7\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-kube-api-access-jqpf7\") pod \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\" (UID: \"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64\") " Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.179103 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.179151 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.183256 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-kube-api-access-jqpf7" (OuterVolumeSpecName: "kube-api-access-jqpf7") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "kube-api-access-jqpf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.183722 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.183985 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.190246 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.191374 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.198417 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" (UID: "4d46a4f5-ef4e-4ce6-b74b-33de51e67f64"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.279768 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqpf7\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-kube-api-access-jqpf7\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.279817 4697 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.279826 4697 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.279836 4697 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.279846 4697 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.279855 4697 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.279864 4697 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 00:13:58 crc 
kubenswrapper[4697]: I0126 00:13:58.982805 4697 generic.go:334] "Generic (PLEG): container finished" podID="f069b1fd-5a99-4f17-bf0b-aa757f46a13a" containerID="024435fe5c28d953c8a5045d361062d18c7b94bf78ac38657628c4bf0a1331d3" exitCode=0 Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.982882 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bthp6" event={"ID":"f069b1fd-5a99-4f17-bf0b-aa757f46a13a","Type":"ContainerDied","Data":"024435fe5c28d953c8a5045d361062d18c7b94bf78ac38657628c4bf0a1331d3"} Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.984909 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" event={"ID":"4d46a4f5-ef4e-4ce6-b74b-33de51e67f64","Type":"ContainerDied","Data":"d0264abca61c04c56bfe8eccceeaf8a17d370fab76961ac34d45f5b801b9bf6f"} Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.984951 4697 scope.go:117] "RemoveContainer" containerID="f2f14fa6175b289f5148ace418e34e560bdc1710b722e6e19563cd8c85c6cf06" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.984959 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xls7q" Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.989591 4697 generic.go:334] "Generic (PLEG): container finished" podID="79e94cda-195b-4740-ae0d-fcdc027823b1" containerID="d4b0958ca404f899bb42106af372a6a9a54bbf47668c33e04eb9924a94f9fa17" exitCode=0 Jan 26 00:13:58 crc kubenswrapper[4697]: I0126 00:13:58.989702 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qgbj" event={"ID":"79e94cda-195b-4740-ae0d-fcdc027823b1","Type":"ContainerDied","Data":"d4b0958ca404f899bb42106af372a6a9a54bbf47668c33e04eb9924a94f9fa17"} Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.021960 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xls7q"] Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.026120 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xls7q"] Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.436427 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-528hw"] Jan 26 00:13:59 crc kubenswrapper[4697]: E0126 00:13:59.436705 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" containerName="registry" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.436725 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" containerName="registry" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.436842 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" containerName="registry" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.437756 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.441755 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.451708 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-528hw"] Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.597772 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkfhq\" (UniqueName: \"kubernetes.io/projected/3854920a-fa2c-47a2-ada5-887c5c0d0019-kube-api-access-xkfhq\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.598114 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-catalog-content\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.598146 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-utilities\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.635869 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6jgjm"] Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.637138 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.640111 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.655354 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jgjm"] Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.698813 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-catalog-content\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.698864 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-utilities\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.698906 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkfhq\" (UniqueName: \"kubernetes.io/projected/3854920a-fa2c-47a2-ada5-887c5c0d0019-kube-api-access-xkfhq\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.699311 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-catalog-content\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " 
pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.699397 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-utilities\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.718374 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkfhq\" (UniqueName: \"kubernetes.io/projected/3854920a-fa2c-47a2-ada5-887c5c0d0019-kube-api-access-xkfhq\") pod \"redhat-marketplace-528hw\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.761347 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.799889 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggzb\" (UniqueName: \"kubernetes.io/projected/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-kube-api-access-pggzb\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.799963 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-utilities\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.800002 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-catalog-content\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.900945 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-utilities\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.901311 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-catalog-content\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.901815 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-utilities\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.901899 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-catalog-content\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.902005 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggzb\" (UniqueName: 
\"kubernetes.io/projected/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-kube-api-access-pggzb\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.919935 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggzb\" (UniqueName: \"kubernetes.io/projected/e85e9cd3-9fce-43a7-9abc-a7883cd21c5c-kube-api-access-pggzb\") pod \"redhat-operators-6jgjm\" (UID: \"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c\") " pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:13:59 crc kubenswrapper[4697]: I0126 00:13:59.985037 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6jgjm" Jan 26 00:14:00 crc kubenswrapper[4697]: I0126 00:14:00.000809 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qgbj" event={"ID":"79e94cda-195b-4740-ae0d-fcdc027823b1","Type":"ContainerStarted","Data":"9dadddcc051d0fbe98dde4e50e7551302f75df17515a5c46cc457055cc3f3b90"} Jan 26 00:14:00 crc kubenswrapper[4697]: I0126 00:14:00.004780 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bthp6" event={"ID":"f069b1fd-5a99-4f17-bf0b-aa757f46a13a","Type":"ContainerStarted","Data":"0a145d4675c0250721b150f2ed762621f07f97fb92d4cda81d8a794a5ec72602"} Jan 26 00:14:00 crc kubenswrapper[4697]: I0126 00:14:00.215660 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-528hw"] Jan 26 00:14:00 crc kubenswrapper[4697]: W0126 00:14:00.220314 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3854920a_fa2c_47a2_ada5_887c5c0d0019.slice/crio-43a2b3b8a71f8ecca652750f5533b2bf29db67752000455398a90821137d88c4 WatchSource:0}: Error finding container 
43a2b3b8a71f8ecca652750f5533b2bf29db67752000455398a90821137d88c4: Status 404 returned error can't find the container with id 43a2b3b8a71f8ecca652750f5533b2bf29db67752000455398a90821137d88c4 Jan 26 00:14:00 crc kubenswrapper[4697]: I0126 00:14:00.381431 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6jgjm"] Jan 26 00:14:00 crc kubenswrapper[4697]: W0126 00:14:00.425900 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85e9cd3_9fce_43a7_9abc_a7883cd21c5c.slice/crio-1c28a1a7c56c48de63416bfa692536a14a6ca2598408700757477fdbf8aa490a WatchSource:0}: Error finding container 1c28a1a7c56c48de63416bfa692536a14a6ca2598408700757477fdbf8aa490a: Status 404 returned error can't find the container with id 1c28a1a7c56c48de63416bfa692536a14a6ca2598408700757477fdbf8aa490a Jan 26 00:14:00 crc kubenswrapper[4697]: I0126 00:14:00.667941 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d46a4f5-ef4e-4ce6-b74b-33de51e67f64" path="/var/lib/kubelet/pods/4d46a4f5-ef4e-4ce6-b74b-33de51e67f64/volumes" Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.028470 4697 generic.go:334] "Generic (PLEG): container finished" podID="79e94cda-195b-4740-ae0d-fcdc027823b1" containerID="9dadddcc051d0fbe98dde4e50e7551302f75df17515a5c46cc457055cc3f3b90" exitCode=0 Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.028540 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qgbj" event={"ID":"79e94cda-195b-4740-ae0d-fcdc027823b1","Type":"ContainerDied","Data":"9dadddcc051d0fbe98dde4e50e7551302f75df17515a5c46cc457055cc3f3b90"} Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.035114 4697 generic.go:334] "Generic (PLEG): container finished" podID="f069b1fd-5a99-4f17-bf0b-aa757f46a13a" containerID="0a145d4675c0250721b150f2ed762621f07f97fb92d4cda81d8a794a5ec72602" exitCode=0 Jan 26 00:14:01 
crc kubenswrapper[4697]: I0126 00:14:01.035436 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bthp6" event={"ID":"f069b1fd-5a99-4f17-bf0b-aa757f46a13a","Type":"ContainerDied","Data":"0a145d4675c0250721b150f2ed762621f07f97fb92d4cda81d8a794a5ec72602"}
Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.037764 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jgjm" event={"ID":"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c","Type":"ContainerDied","Data":"98d2ca5584abc4b58313f970c3f70a178b77c434ef4eeb1797f172a466fd2800"}
Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.038222 4697 generic.go:334] "Generic (PLEG): container finished" podID="e85e9cd3-9fce-43a7-9abc-a7883cd21c5c" containerID="98d2ca5584abc4b58313f970c3f70a178b77c434ef4eeb1797f172a466fd2800" exitCode=0
Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.038350 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jgjm" event={"ID":"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c","Type":"ContainerStarted","Data":"1c28a1a7c56c48de63416bfa692536a14a6ca2598408700757477fdbf8aa490a"}
Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.040255 4697 generic.go:334] "Generic (PLEG): container finished" podID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerID="0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e" exitCode=0
Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.040279 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-528hw" event={"ID":"3854920a-fa2c-47a2-ada5-887c5c0d0019","Type":"ContainerDied","Data":"0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e"}
Jan 26 00:14:01 crc kubenswrapper[4697]: I0126 00:14:01.040296 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-528hw" event={"ID":"3854920a-fa2c-47a2-ada5-887c5c0d0019","Type":"ContainerStarted","Data":"43a2b3b8a71f8ecca652750f5533b2bf29db67752000455398a90821137d88c4"}
Jan 26 00:14:06 crc kubenswrapper[4697]: I0126 00:14:06.329154 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:14:06 crc kubenswrapper[4697]: I0126 00:14:06.329584 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.204561 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jgjm" event={"ID":"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c","Type":"ContainerStarted","Data":"9a8a43b7a0a31b90b3f142cec6c211e38f97d672ebdbcc9ef6ae218e4adfd8ac"}
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.206252 4697 generic.go:334] "Generic (PLEG): container finished" podID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerID="a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12" exitCode=0
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.206325 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-528hw" event={"ID":"3854920a-fa2c-47a2-ada5-887c5c0d0019","Type":"ContainerDied","Data":"a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12"}
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.208591 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.209444 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qgbj" event={"ID":"79e94cda-195b-4740-ae0d-fcdc027823b1","Type":"ContainerStarted","Data":"c46a04d631b0a7fd4c4305745dce681db30efc2949102f263bfb47b83d4af033"}
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.216058 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bthp6" event={"ID":"f069b1fd-5a99-4f17-bf0b-aa757f46a13a","Type":"ContainerStarted","Data":"f88e7ff6947af77050e226c0747bd41939e2f58692ded67518345b70843958c8"}
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.252560 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6qgbj" podStartSLOduration=3.384812324 podStartE2EDuration="26.252541406s" podCreationTimestamp="2026-01-26 00:13:57 +0000 UTC" firstStartedPulling="2026-01-26 00:13:58.990863504 +0000 UTC m=+380.627640894" lastFinishedPulling="2026-01-26 00:14:21.858592586 +0000 UTC m=+403.495369976" observedRunningTime="2026-01-26 00:14:23.248005622 +0000 UTC m=+404.884783022" watchObservedRunningTime="2026-01-26 00:14:23.252541406 +0000 UTC m=+404.889318796"
Jan 26 00:14:23 crc kubenswrapper[4697]: I0126 00:14:23.271141 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bthp6" podStartSLOduration=3.795598041 podStartE2EDuration="26.271122865s" podCreationTimestamp="2026-01-26 00:13:57 +0000 UTC" firstStartedPulling="2026-01-26 00:13:58.984781244 +0000 UTC m=+380.621558634" lastFinishedPulling="2026-01-26 00:14:21.460306068 +0000 UTC m=+403.097083458" observedRunningTime="2026-01-26 00:14:23.266641523 +0000 UTC m=+404.903418923" watchObservedRunningTime="2026-01-26 00:14:23.271122865 +0000 UTC m=+404.907900255"
Jan 26 00:14:23 crc kubenswrapper[4697]: E0126 00:14:23.601913 4697 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode85e9cd3_9fce_43a7_9abc_a7883cd21c5c.slice/crio-conmon-9a8a43b7a0a31b90b3f142cec6c211e38f97d672ebdbcc9ef6ae218e4adfd8ac.scope\": RecentStats: unable to find data in memory cache]"
Jan 26 00:14:24 crc kubenswrapper[4697]: I0126 00:14:24.224011 4697 generic.go:334] "Generic (PLEG): container finished" podID="e85e9cd3-9fce-43a7-9abc-a7883cd21c5c" containerID="9a8a43b7a0a31b90b3f142cec6c211e38f97d672ebdbcc9ef6ae218e4adfd8ac" exitCode=0
Jan 26 00:14:24 crc kubenswrapper[4697]: I0126 00:14:24.224339 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jgjm" event={"ID":"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c","Type":"ContainerDied","Data":"9a8a43b7a0a31b90b3f142cec6c211e38f97d672ebdbcc9ef6ae218e4adfd8ac"}
Jan 26 00:14:25 crc kubenswrapper[4697]: I0126 00:14:25.231938 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6jgjm" event={"ID":"e85e9cd3-9fce-43a7-9abc-a7883cd21c5c","Type":"ContainerStarted","Data":"5fd96e5091b4922728534c1999e478d8a9e9457a4fadeb68cc422902b063f986"}
Jan 26 00:14:25 crc kubenswrapper[4697]: I0126 00:14:25.233873 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-528hw" event={"ID":"3854920a-fa2c-47a2-ada5-887c5c0d0019","Type":"ContainerStarted","Data":"91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590"}
Jan 26 00:14:25 crc kubenswrapper[4697]: I0126 00:14:25.252599 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6jgjm" podStartSLOduration=2.4644018069999998 podStartE2EDuration="26.252579679s" podCreationTimestamp="2026-01-26 00:13:59 +0000 UTC" firstStartedPulling="2026-01-26 00:14:01.038879804 +0000 UTC m=+382.675657204" lastFinishedPulling="2026-01-26 00:14:24.827057686 +0000 UTC m=+406.463835076" observedRunningTime="2026-01-26 00:14:25.249318133 +0000 UTC m=+406.886095543" watchObservedRunningTime="2026-01-26 00:14:25.252579679 +0000 UTC m=+406.889357069"
Jan 26 00:14:25 crc kubenswrapper[4697]: I0126 00:14:25.269190 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-528hw" podStartSLOduration=2.979077657 podStartE2EDuration="26.2691743s" podCreationTimestamp="2026-01-26 00:13:59 +0000 UTC" firstStartedPulling="2026-01-26 00:14:01.041597914 +0000 UTC m=+382.678375304" lastFinishedPulling="2026-01-26 00:14:24.331694557 +0000 UTC m=+405.968471947" observedRunningTime="2026-01-26 00:14:25.264842462 +0000 UTC m=+406.901619842" watchObservedRunningTime="2026-01-26 00:14:25.2691743 +0000 UTC m=+406.905951690"
Jan 26 00:14:27 crc kubenswrapper[4697]: I0126 00:14:27.367629 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6qgbj"
Jan 26 00:14:27 crc kubenswrapper[4697]: I0126 00:14:27.367984 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6qgbj"
Jan 26 00:14:27 crc kubenswrapper[4697]: I0126 00:14:27.408441 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6qgbj"
Jan 26 00:14:27 crc kubenswrapper[4697]: I0126 00:14:27.565335 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bthp6"
Jan 26 00:14:27 crc kubenswrapper[4697]: I0126 00:14:27.565656 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bthp6"
Jan 26 00:14:27 crc kubenswrapper[4697]: I0126 00:14:27.607596 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bthp6"
Jan 26 00:14:28 crc kubenswrapper[4697]: I0126 00:14:28.298254 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bthp6"
Jan 26 00:14:28 crc kubenswrapper[4697]: I0126 00:14:28.298884 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6qgbj"
Jan 26 00:14:29 crc kubenswrapper[4697]: I0126 00:14:29.761305 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-528hw"
Jan 26 00:14:29 crc kubenswrapper[4697]: I0126 00:14:29.761659 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-528hw"
Jan 26 00:14:29 crc kubenswrapper[4697]: I0126 00:14:29.812964 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-528hw"
Jan 26 00:14:29 crc kubenswrapper[4697]: I0126 00:14:29.986005 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6jgjm"
Jan 26 00:14:29 crc kubenswrapper[4697]: I0126 00:14:29.986520 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6jgjm"
Jan 26 00:14:30 crc kubenswrapper[4697]: I0126 00:14:30.297233 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-528hw"
Jan 26 00:14:31 crc kubenswrapper[4697]: I0126 00:14:31.023962 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6jgjm" podUID="e85e9cd3-9fce-43a7-9abc-a7883cd21c5c" containerName="registry-server" probeResult="failure" output=<
Jan 26 00:14:31 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s
Jan 26 00:14:31 crc kubenswrapper[4697]: >
Jan 26 00:14:36 crc kubenswrapper[4697]: I0126 00:14:36.328398 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:14:36 crc kubenswrapper[4697]: I0126 00:14:36.328774 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:14:36 crc kubenswrapper[4697]: I0126 00:14:36.328838 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7"
Jan 26 00:14:36 crc kubenswrapper[4697]: I0126 00:14:36.329559 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0190f13b753887868a356c904d8c8ead28f75b77c98a9b67db0b95f6c3108511"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 00:14:36 crc kubenswrapper[4697]: I0126 00:14:36.329656 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" containerID="cri-o://0190f13b753887868a356c904d8c8ead28f75b77c98a9b67db0b95f6c3108511" gracePeriod=600
Jan 26 00:14:39 crc kubenswrapper[4697]: I0126 00:14:39.310017 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerID="0190f13b753887868a356c904d8c8ead28f75b77c98a9b67db0b95f6c3108511" exitCode=0
Jan 26 00:14:39 crc kubenswrapper[4697]: I0126 00:14:39.310125 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"0190f13b753887868a356c904d8c8ead28f75b77c98a9b67db0b95f6c3108511"}
Jan 26 00:14:39 crc kubenswrapper[4697]: I0126 00:14:39.310462 4697 scope.go:117] "RemoveContainer" containerID="dde726d2c00ae2842d203896388bde59c0b395e12a9238183bd5bdaf6bdc3e98"
Jan 26 00:14:40 crc kubenswrapper[4697]: I0126 00:14:40.024776 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6jgjm"
Jan 26 00:14:40 crc kubenswrapper[4697]: I0126 00:14:40.070677 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6jgjm"
Jan 26 00:14:40 crc kubenswrapper[4697]: I0126 00:14:40.318282 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"ed731b9450168ff09b60b96e60d3ea86f0ad19bb9c61493c771bfbf93308c36f"}
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.169327 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"]
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.171456 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.174660 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.178184 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"]
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.179583 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.251394 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsvbg\" (UniqueName: \"kubernetes.io/projected/365cb074-ae03-4224-b290-4e0e2c0bd22a-kube-api-access-bsvbg\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.251468 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cb074-ae03-4224-b290-4e0e2c0bd22a-secret-volume\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.251501 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cb074-ae03-4224-b290-4e0e2c0bd22a-config-volume\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.353315 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsvbg\" (UniqueName: \"kubernetes.io/projected/365cb074-ae03-4224-b290-4e0e2c0bd22a-kube-api-access-bsvbg\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.353426 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cb074-ae03-4224-b290-4e0e2c0bd22a-secret-volume\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.353457 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cb074-ae03-4224-b290-4e0e2c0bd22a-config-volume\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.354505 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cb074-ae03-4224-b290-4e0e2c0bd22a-config-volume\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.366043 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cb074-ae03-4224-b290-4e0e2c0bd22a-secret-volume\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.368811 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsvbg\" (UniqueName: \"kubernetes.io/projected/365cb074-ae03-4224-b290-4e0e2c0bd22a-kube-api-access-bsvbg\") pod \"collect-profiles-29489775-k5xrl\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.491135 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:00 crc kubenswrapper[4697]: I0126 00:15:00.689060 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"]
Jan 26 00:15:01 crc kubenswrapper[4697]: I0126 00:15:01.450223 4697 generic.go:334] "Generic (PLEG): container finished" podID="365cb074-ae03-4224-b290-4e0e2c0bd22a" containerID="5ff79504bfaca4f9165197c4d08f9340f402df266911cd5c7cc06f863743f2fa" exitCode=0
Jan 26 00:15:01 crc kubenswrapper[4697]: I0126 00:15:01.450324 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl" event={"ID":"365cb074-ae03-4224-b290-4e0e2c0bd22a","Type":"ContainerDied","Data":"5ff79504bfaca4f9165197c4d08f9340f402df266911cd5c7cc06f863743f2fa"}
Jan 26 00:15:01 crc kubenswrapper[4697]: I0126 00:15:01.450426 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl" event={"ID":"365cb074-ae03-4224-b290-4e0e2c0bd22a","Type":"ContainerStarted","Data":"418b4b04fda58d73ce4e0e3c3b183852012bdfb756c82b380abdee43c6863e87"}
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.664037 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.683300 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cb074-ae03-4224-b290-4e0e2c0bd22a-config-volume\") pod \"365cb074-ae03-4224-b290-4e0e2c0bd22a\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") "
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.683380 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsvbg\" (UniqueName: \"kubernetes.io/projected/365cb074-ae03-4224-b290-4e0e2c0bd22a-kube-api-access-bsvbg\") pod \"365cb074-ae03-4224-b290-4e0e2c0bd22a\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") "
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.683549 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cb074-ae03-4224-b290-4e0e2c0bd22a-secret-volume\") pod \"365cb074-ae03-4224-b290-4e0e2c0bd22a\" (UID: \"365cb074-ae03-4224-b290-4e0e2c0bd22a\") "
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.683908 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365cb074-ae03-4224-b290-4e0e2c0bd22a-config-volume" (OuterVolumeSpecName: "config-volume") pod "365cb074-ae03-4224-b290-4e0e2c0bd22a" (UID: "365cb074-ae03-4224-b290-4e0e2c0bd22a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.688926 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365cb074-ae03-4224-b290-4e0e2c0bd22a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "365cb074-ae03-4224-b290-4e0e2c0bd22a" (UID: "365cb074-ae03-4224-b290-4e0e2c0bd22a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.688949 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365cb074-ae03-4224-b290-4e0e2c0bd22a-kube-api-access-bsvbg" (OuterVolumeSpecName: "kube-api-access-bsvbg") pod "365cb074-ae03-4224-b290-4e0e2c0bd22a" (UID: "365cb074-ae03-4224-b290-4e0e2c0bd22a"). InnerVolumeSpecName "kube-api-access-bsvbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.784959 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/365cb074-ae03-4224-b290-4e0e2c0bd22a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.785015 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365cb074-ae03-4224-b290-4e0e2c0bd22a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:02 crc kubenswrapper[4697]: I0126 00:15:02.785029 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsvbg\" (UniqueName: \"kubernetes.io/projected/365cb074-ae03-4224-b290-4e0e2c0bd22a-kube-api-access-bsvbg\") on node \"crc\" DevicePath \"\""
Jan 26 00:15:03 crc kubenswrapper[4697]: I0126 00:15:03.464026 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl" event={"ID":"365cb074-ae03-4224-b290-4e0e2c0bd22a","Type":"ContainerDied","Data":"418b4b04fda58d73ce4e0e3c3b183852012bdfb756c82b380abdee43c6863e87"}
Jan 26 00:15:03 crc kubenswrapper[4697]: I0126 00:15:03.464331 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="418b4b04fda58d73ce4e0e3c3b183852012bdfb756c82b380abdee43c6863e87"
Jan 26 00:15:03 crc kubenswrapper[4697]: I0126 00:15:03.464109 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489775-k5xrl"
Jan 26 00:15:38 crc kubenswrapper[4697]: I0126 00:15:38.859797 4697 scope.go:117] "RemoveContainer" containerID="cee2ec6ca2169fb278c5ca4c0da6875fef00931acf272aeea9138cf10fa48740"
Jan 26 00:17:06 crc kubenswrapper[4697]: I0126 00:17:06.329168 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:17:06 crc kubenswrapper[4697]: I0126 00:17:06.329738 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:17:36 crc kubenswrapper[4697]: I0126 00:17:36.329113 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:17:36 crc kubenswrapper[4697]: I0126 00:17:36.329524 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:17:38 crc kubenswrapper[4697]: I0126 00:17:38.898560 4697 scope.go:117] "RemoveContainer" containerID="0c4434458cbeb2495b9a3bf1dd8ec7ed5e7177302f27efe8107ca8e584b02462"
Jan 26 00:17:38 crc kubenswrapper[4697]: I0126 00:17:38.939354 4697 scope.go:117] "RemoveContainer" containerID="208d107e9a517caef047f80c124ed9a6a580da39047e850a37f81cc26cf110e4"
Jan 26 00:18:06 crc kubenswrapper[4697]: I0126 00:18:06.329129 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 00:18:06 crc kubenswrapper[4697]: I0126 00:18:06.329681 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 00:18:06 crc kubenswrapper[4697]: I0126 00:18:06.329752 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7"
Jan 26 00:18:06 crc kubenswrapper[4697]: I0126 00:18:06.330471 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed731b9450168ff09b60b96e60d3ea86f0ad19bb9c61493c771bfbf93308c36f"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 00:18:06 crc kubenswrapper[4697]: I0126 00:18:06.330539 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" containerID="cri-o://ed731b9450168ff09b60b96e60d3ea86f0ad19bb9c61493c771bfbf93308c36f" gracePeriod=600
Jan 26 00:18:07 crc kubenswrapper[4697]: I0126 00:18:07.457863 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerID="ed731b9450168ff09b60b96e60d3ea86f0ad19bb9c61493c771bfbf93308c36f" exitCode=0
Jan 26 00:18:07 crc kubenswrapper[4697]: I0126 00:18:07.457941 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"ed731b9450168ff09b60b96e60d3ea86f0ad19bb9c61493c771bfbf93308c36f"}
Jan 26 00:18:07 crc kubenswrapper[4697]: I0126 00:18:07.458420 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"ed5dd1945eec0f6d970778673fb89939e592f26c1f170f55c1d612f6dec2ea84"}
Jan 26 00:18:07 crc kubenswrapper[4697]: I0126 00:18:07.458439 4697 scope.go:117] "RemoveContainer" containerID="0190f13b753887868a356c904d8c8ead28f75b77c98a9b67db0b95f6c3108511"
Jan 26 00:19:28 crc kubenswrapper[4697]: I0126 00:19:28.774235 4697 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.208816 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h7x5s"]
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.209910 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-controller" containerID="cri-o://2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.210390 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="sbdb" containerID="cri-o://b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.210596 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="nbdb" containerID="cri-o://0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.210646 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="northd" containerID="cri-o://1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.210686 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.210730 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-node" containerID="cri-o://5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.210767 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-acl-logging" containerID="cri-o://f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.252014 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller" containerID="cri-o://7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52" gracePeriod=30
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.565518 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/2.log"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.567610 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovn-acl-logging/0.log"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.568089 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovn-controller/0.log"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.568556 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.627464 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6rmzr"]
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628302 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kubecfg-setup"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628328 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kubecfg-setup"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628340 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628347 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628355 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628361 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628370 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="northd"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628379 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="northd"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628391 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-acl-logging"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628398 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-acl-logging"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628409 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="nbdb"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628419 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="nbdb"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628433 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365cb074-ae03-4224-b290-4e0e2c0bd22a" containerName="collect-profiles"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628440 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="365cb074-ae03-4224-b290-4e0e2c0bd22a" containerName="collect-profiles"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628452 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-node"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628459 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-node"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628468 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628475 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628486 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628494 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628504 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628511 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-ovn-metrics"
Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628521 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="sbdb"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628527 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="sbdb"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628653 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="nbdb"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628666 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628673 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-controller"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628680 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="northd"
Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628688 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="365cb074-ae03-4224-b290-4e0e2c0bd22a"
containerName="collect-profiles" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628699 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628709 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovn-acl-logging" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628719 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="kube-rbac-proxy-node" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628729 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628739 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628749 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="sbdb" Jan 26 00:19:51 crc kubenswrapper[4697]: E0126 00:19:51.628851 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628859 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.628973 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerName="ovnkube-controller" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.630698 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707315 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-env-overrides\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707409 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtbg8\" (UniqueName: \"kubernetes.io/projected/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-kube-api-access-vtbg8\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707441 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-systemd-units\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707462 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-kubelet\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707485 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707509 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-log-socket\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707532 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-ovn-kubernetes\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707562 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-etc-openvswitch\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707585 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-slash\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707579 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707609 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-openvswitch\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707640 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707665 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707690 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-var-lib-openvswitch\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707711 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-systemd\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc 
kubenswrapper[4697]: I0126 00:19:51.707728 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-node-log\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707750 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-script-lib\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707775 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707790 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-ovn\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707811 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-netns\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707843 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-netd\") 
pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707867 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-bin\") pod \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\" (UID: \"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4\") " Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707667 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707786 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707816 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707838 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-log-socket" (OuterVolumeSpecName: "log-socket") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707863 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708013 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707886 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-node-log" (OuterVolumeSpecName: "node-log") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.707964 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-slash" (OuterVolumeSpecName: "host-slash") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708043 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708088 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708089 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708146 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708273 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708431 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708662 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708786 4697 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708798 4697 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708807 4697 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708815 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708823 4697 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708831 4697 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708839 4697 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708848 
4697 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708855 4697 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708865 4697 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708873 4697 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708881 4697 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708889 4697 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708898 4697 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708905 4697 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708913 4697 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.708921 4697 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.713244 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-kube-api-access-vtbg8" (OuterVolumeSpecName: "kube-api-access-vtbg8") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "kube-api-access-vtbg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.713494 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.723393 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" (UID: "9b97fcec-14c2-49b1-bdc5-762e1b42d7a4"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809473 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-cni-bin\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809524 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-ovn\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809541 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-run-netns\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809568 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-ovnkube-script-lib\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809602 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809722 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809794 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-etc-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809849 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18d9fb86-a56e-46be-beed-51882f079248-ovn-node-metrics-cert\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809903 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809925 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-systemd-units\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809958 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-node-log\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.809985 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-env-overrides\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810020 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-log-socket\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810047 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-cni-netd\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810310 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-systemd\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810342 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-slash\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810357 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-ovnkube-config\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810376 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5mr\" (UniqueName: \"kubernetes.io/projected/18d9fb86-a56e-46be-beed-51882f079248-kube-api-access-vf5mr\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810396 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-kubelet\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810457 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-var-lib-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810552 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtbg8\" (UniqueName: \"kubernetes.io/projected/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-kube-api-access-vtbg8\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810570 4697 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.810582 4697 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911314 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-cni-netd\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911375 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-systemd\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911418 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-slash\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911441 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-ovnkube-config\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911465 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5mr\" (UniqueName: \"kubernetes.io/projected/18d9fb86-a56e-46be-beed-51882f079248-kube-api-access-vf5mr\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911488 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-kubelet\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911511 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-var-lib-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911534 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-cni-bin\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911530 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-cni-netd\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911608 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-ovn\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911563 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-ovn\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911662 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-cni-bin\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911677 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-var-lib-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: 
\"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911712 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-run-netns\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911700 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-kubelet\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911711 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-systemd\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911680 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-run-netns\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911604 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-slash\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc 
kubenswrapper[4697]: I0126 00:19:51.911803 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911839 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-ovnkube-script-lib\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911872 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911900 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911915 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-etc-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911937 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-etc-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911949 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18d9fb86-a56e-46be-beed-51882f079248-ovn-node-metrics-cert\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911969 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-host-run-ovn-kubernetes\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.911977 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912000 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-systemd-units\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 
00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912017 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-node-log\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912029 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-run-openvswitch\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912037 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-env-overrides\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912192 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-log-socket\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912210 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-systemd-units\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912245 4697 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-node-log\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912338 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18d9fb86-a56e-46be-beed-51882f079248-log-socket\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912500 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-ovnkube-config\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.912913 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-ovnkube-script-lib\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.913592 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18d9fb86-a56e-46be-beed-51882f079248-env-overrides\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.918544 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/18d9fb86-a56e-46be-beed-51882f079248-ovn-node-metrics-cert\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.930047 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5mr\" (UniqueName: \"kubernetes.io/projected/18d9fb86-a56e-46be-beed-51882f079248-kube-api-access-vf5mr\") pod \"ovnkube-node-6rmzr\" (UID: \"18d9fb86-a56e-46be-beed-51882f079248\") " pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:51 crc kubenswrapper[4697]: I0126 00:19:51.950006 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.055684 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"53402ae2c20bb50dca2a8606619ca5132273543865fcacdd3479cd67af63c5a8"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.057875 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bjlq7_638a78f4-bdb3-4d78-8faf-b4bc299717d2/kube-multus/1.log" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.059198 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bjlq7_638a78f4-bdb3-4d78-8faf-b4bc299717d2/kube-multus/0.log" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.059266 4697 generic.go:334] "Generic (PLEG): container finished" podID="638a78f4-bdb3-4d78-8faf-b4bc299717d2" containerID="826e1d4598d9301073e86a13701fc4475d515c4911462d2d7299a6f7fdec3ee5" exitCode=2 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.059352 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bjlq7" 
event={"ID":"638a78f4-bdb3-4d78-8faf-b4bc299717d2","Type":"ContainerDied","Data":"826e1d4598d9301073e86a13701fc4475d515c4911462d2d7299a6f7fdec3ee5"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.059398 4697 scope.go:117] "RemoveContainer" containerID="1e028e8a4051048fccc0ecd4f5ab22646cc5565d53374aa73d4ab4e5d11b5e69" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.059960 4697 scope.go:117] "RemoveContainer" containerID="826e1d4598d9301073e86a13701fc4475d515c4911462d2d7299a6f7fdec3ee5" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.071794 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovnkube-controller/2.log" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.085650 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovn-acl-logging/0.log" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.086913 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7x5s_9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/ovn-controller/0.log" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087653 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52" exitCode=0 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087681 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a" exitCode=0 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087692 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343" exitCode=0 Jan 26 00:19:52 crc 
kubenswrapper[4697]: I0126 00:19:52.087701 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c" exitCode=0 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087709 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b" exitCode=0 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087718 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4" exitCode=0 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087726 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3" exitCode=143 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087735 4697 generic.go:334] "Generic (PLEG): container finished" podID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" containerID="2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df" exitCode=143 Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087785 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087843 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087860 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087970 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.087989 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088007 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088019 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088034 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088052 4697 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088060 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088088 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088095 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088104 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088113 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088122 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088130 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088137 4697 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088146 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088157 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088164 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088171 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088177 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088183 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088190 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} Jan 26 
00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088196 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088202 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088209 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088215 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088223 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088233 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088239 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088246 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088252 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088259 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088265 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088272 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088278 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088285 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088292 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088300 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7x5s" event={"ID":"9b97fcec-14c2-49b1-bdc5-762e1b42d7a4","Type":"ContainerDied","Data":"054ec7f0069becfbbacc72cfc614b256b40baa10bbcb623248886d1073a64cc0"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088311 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088324 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088331 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088337 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088343 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088350 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088356 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088362 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088371 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.088378 4697 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"}
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.090570 4697 scope.go:117] "RemoveContainer" containerID="7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.111502 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.150420 4697 scope.go:117] "RemoveContainer" containerID="b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.170871 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h7x5s"]
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.181579 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h7x5s"]
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.227996 4697 scope.go:117] "RemoveContainer" containerID="0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.243677 4697 scope.go:117] "RemoveContainer" containerID="1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.260322 4697 scope.go:117] "RemoveContainer" containerID="d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.278584 4697 scope.go:117] "RemoveContainer" containerID="5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.300181 4697 scope.go:117] "RemoveContainer" containerID="f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.323500 4697 scope.go:117] "RemoveContainer" containerID="2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.364101 4697 scope.go:117] "RemoveContainer" containerID="bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.382234 4697 scope.go:117] "RemoveContainer" containerID="7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.382816 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": container with ID starting with 7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52 not found: ID does not exist" containerID="7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.382892 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} err="failed to get container status \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": rpc error: code = NotFound desc = could not find container \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": container with ID starting with 7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.382933 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.383512 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": container with ID starting with 9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624 not found: ID does not exist" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.383539 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} err="failed to get container status \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": rpc error: code = NotFound desc = could not find container \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": container with ID starting with 9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.383554 4697 scope.go:117] "RemoveContainer" containerID="b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.383850 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": container with ID starting with b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a not found: ID does not exist" containerID="b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.383911 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} err="failed to get container status \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": rpc error: code = NotFound desc = could not find container \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": container with ID starting with b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.383964 4697 scope.go:117] "RemoveContainer" containerID="0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.384297 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": container with ID starting with 0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343 not found: ID does not exist" containerID="0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.384331 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} err="failed to get container status \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": rpc error: code = NotFound desc = could not find container \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": container with ID starting with 0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.384350 4697 scope.go:117] "RemoveContainer" containerID="1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.384668 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": container with ID starting with 1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c not found: ID does not exist" containerID="1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.384713 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} err="failed to get container status \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": rpc error: code = NotFound desc = could not find container \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": container with ID starting with 1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.384743 4697 scope.go:117] "RemoveContainer" containerID="d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.385029 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": container with ID starting with d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b not found: ID does not exist" containerID="d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.385058 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} err="failed to get container status \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": rpc error: code = NotFound desc = could not find container \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": container with ID starting with d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.385089 4697 scope.go:117] "RemoveContainer" containerID="5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.385485 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": container with ID starting with 5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4 not found: ID does not exist" containerID="5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.385548 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"} err="failed to get container status \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": rpc error: code = NotFound desc = could not find container \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": container with ID starting with 5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.385593 4697 scope.go:117] "RemoveContainer" containerID="f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.386146 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": container with ID starting with f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3 not found: ID does not exist" containerID="f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.386176 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"} err="failed to get container status \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": rpc error: code = NotFound desc = could not find container \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": container with ID starting with f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.386197 4697 scope.go:117] "RemoveContainer" containerID="2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.386493 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": container with ID starting with 2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df not found: ID does not exist" containerID="2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.386520 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"} err="failed to get container status \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": rpc error: code = NotFound desc = could not find container \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": container with ID starting with 2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.386537 4697 scope.go:117] "RemoveContainer" containerID="bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"
Jan 26 00:19:52 crc kubenswrapper[4697]: E0126 00:19:52.386907 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": container with ID starting with bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2 not found: ID does not exist" containerID="bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.386969 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"} err="failed to get container status \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": rpc error: code = NotFound desc = could not find container \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": container with ID starting with bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.387006 4697 scope.go:117] "RemoveContainer" containerID="7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.387623 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} err="failed to get container status \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": rpc error: code = NotFound desc = could not find container \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": container with ID starting with 7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.387651 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388021 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} err="failed to get container status \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": rpc error: code = NotFound desc = could not find container \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": container with ID starting with 9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388050 4697 scope.go:117] "RemoveContainer" containerID="b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388375 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} err="failed to get container status \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": rpc error: code = NotFound desc = could not find container \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": container with ID starting with b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388398 4697 scope.go:117] "RemoveContainer" containerID="0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388622 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} err="failed to get container status \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": rpc error: code = NotFound desc = could not find container \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": container with ID starting with 0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388644 4697 scope.go:117] "RemoveContainer" containerID="1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388878 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} err="failed to get container status \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": rpc error: code = NotFound desc = could not find container \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": container with ID starting with 1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.388899 4697 scope.go:117] "RemoveContainer" containerID="d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389152 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} err="failed to get container status \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": rpc error: code = NotFound desc = could not find container \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": container with ID starting with d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389179 4697 scope.go:117] "RemoveContainer" containerID="5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389383 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"} err="failed to get container status \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": rpc error: code = NotFound desc = could not find container \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": container with ID starting with 5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389404 4697 scope.go:117] "RemoveContainer" containerID="f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389593 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"} err="failed to get container status \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": rpc error: code = NotFound desc = could not find container \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": container with ID starting with f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389612 4697 scope.go:117] "RemoveContainer" containerID="2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389836 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"} err="failed to get container status \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": rpc error: code = NotFound desc = could not find container \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": container with ID starting with 2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.389858 4697 scope.go:117] "RemoveContainer" containerID="bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.390150 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"} err="failed to get container status \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": rpc error: code = NotFound desc = could not find container \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": container with ID starting with bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.390206 4697 scope.go:117] "RemoveContainer" containerID="7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.390635 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} err="failed to get container status \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": rpc error: code = NotFound desc = could not find container \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": container with ID starting with 7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.390659 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.391186 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} err="failed to get container status \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": rpc error: code = NotFound desc = could not find container \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": container with ID starting with 9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.391223 4697 scope.go:117] "RemoveContainer" containerID="b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.391520 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} err="failed to get container status \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": rpc error: code = NotFound desc = could not find container \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": container with ID starting with b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.391546 4697 scope.go:117] "RemoveContainer" containerID="0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.391952 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} err="failed to get container status \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": rpc error: code = NotFound desc = could not find container \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": container with ID starting with 0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.391982 4697 scope.go:117] "RemoveContainer" containerID="1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.392354 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} err="failed to get container status \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": rpc error: code = NotFound desc = could not find container \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": container with ID starting with 1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.392380 4697 scope.go:117] "RemoveContainer" containerID="d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.392696 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} err="failed to get container status \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": rpc error: code = NotFound desc = could not find container \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": container with ID starting with d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.392740 4697 scope.go:117] "RemoveContainer" containerID="5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.393215 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"} err="failed to get container status \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": rpc error: code = NotFound desc = could not find container \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": container with ID starting with 5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.393251 4697 scope.go:117] "RemoveContainer" containerID="f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.393610 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"} err="failed to get container status \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": rpc error: code = NotFound desc = could not find container \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": container with ID starting with f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.393636 4697 scope.go:117] "RemoveContainer" containerID="2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.393935 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"} err="failed to get container status \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": rpc error: code = NotFound desc = could not find container \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": container with ID starting with 2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.393956 4697 scope.go:117] "RemoveContainer" containerID="bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.394369 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"} err="failed to get container status \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": rpc error: code = NotFound desc = could not find container \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": container with ID starting with bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.394395 4697 scope.go:117] "RemoveContainer" containerID="7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.394668 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52"} err="failed to get container status \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": rpc error: code = NotFound desc = could not find container \"7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52\": container with ID starting with 7862787d3f1445fdd2889f5551da2f33aeb257f1872ec72b9717f922feabaf52 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.394753 4697 scope.go:117] "RemoveContainer" containerID="9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.395106 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624"} err="failed to get container status \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": rpc error: code = NotFound desc = could not find container \"9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624\": container with ID starting with 9d256c6ef6a12b60db977ac01e99843ebac09cd4a59b6bc96954936803cbb624 not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.395161 4697 scope.go:117] "RemoveContainer" containerID="b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.395396 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a"} err="failed to get container status \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": rpc error: code = NotFound desc = could not find container \"b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a\": container with ID starting with b39bcb2ff81858d1f7b259ebc0af2a10e2c446d0b82e4876326c5dcb9602443a not found: ID does not exist"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.395428 4697 scope.go:117] "RemoveContainer" containerID="0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"
Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.395688 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343"} err="failed to get container status \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": rpc error: code = NotFound desc = could not find container \"0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343\": container with ID starting with 0bf86041df0fd9b3c40429714d007ee1320d3947bd881735ca12df52b85da343 not found: ID does not
exist" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.395719 4697 scope.go:117] "RemoveContainer" containerID="1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396027 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c"} err="failed to get container status \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": rpc error: code = NotFound desc = could not find container \"1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c\": container with ID starting with 1c3e0b90f718d43b61392584cd5c74a201a4ba3836a95d388f6956235eefd06c not found: ID does not exist" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396062 4697 scope.go:117] "RemoveContainer" containerID="d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396361 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b"} err="failed to get container status \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": rpc error: code = NotFound desc = could not find container \"d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b\": container with ID starting with d4edfe450284f2126983bc3f685f2721336c1b521d2718cc3c34e8a958c6512b not found: ID does not exist" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396385 4697 scope.go:117] "RemoveContainer" containerID="5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396617 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4"} err="failed to get container status 
\"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": rpc error: code = NotFound desc = could not find container \"5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4\": container with ID starting with 5673e35b908fe2340003538bdff5d89b968fddee8b82eecc4985cc905fc9c8b4 not found: ID does not exist" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396644 4697 scope.go:117] "RemoveContainer" containerID="f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396902 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3"} err="failed to get container status \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": rpc error: code = NotFound desc = could not find container \"f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3\": container with ID starting with f5d52ac75617cfdc5915bd9f21c09a2cb265938ce8e2ff9f5658d34c75d650c3 not found: ID does not exist" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.396929 4697 scope.go:117] "RemoveContainer" containerID="2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.397173 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df"} err="failed to get container status \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": rpc error: code = NotFound desc = could not find container \"2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df\": container with ID starting with 2b91bd5cf45009f90d70b11975de55c883e06df244d645420179f8b985cf16df not found: ID does not exist" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.397198 4697 scope.go:117] "RemoveContainer" 
containerID="bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.397429 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2"} err="failed to get container status \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": rpc error: code = NotFound desc = could not find container \"bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2\": container with ID starting with bee1e892c88803ab055da67da12b614b4f5fe9b5baf18604c126a311ed2f9ff2 not found: ID does not exist" Jan 26 00:19:52 crc kubenswrapper[4697]: I0126 00:19:52.668026 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b97fcec-14c2-49b1-bdc5-762e1b42d7a4" path="/var/lib/kubelet/pods/9b97fcec-14c2-49b1-bdc5-762e1b42d7a4/volumes" Jan 26 00:19:53 crc kubenswrapper[4697]: I0126 00:19:53.096539 4697 generic.go:334] "Generic (PLEG): container finished" podID="18d9fb86-a56e-46be-beed-51882f079248" containerID="f51e023f5fe6ea343d4cb492eccd314802930e98fc513d9565c40450e04a535a" exitCode=0 Jan 26 00:19:53 crc kubenswrapper[4697]: I0126 00:19:53.096598 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerDied","Data":"f51e023f5fe6ea343d4cb492eccd314802930e98fc513d9565c40450e04a535a"} Jan 26 00:19:53 crc kubenswrapper[4697]: I0126 00:19:53.098895 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-bjlq7_638a78f4-bdb3-4d78-8faf-b4bc299717d2/kube-multus/1.log" Jan 26 00:19:53 crc kubenswrapper[4697]: I0126 00:19:53.099055 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-bjlq7" 
event={"ID":"638a78f4-bdb3-4d78-8faf-b4bc299717d2","Type":"ContainerStarted","Data":"fffe1923c5f94f64536604247f7201acb0dea2ece6a10e76aeff7029c71f4e64"} Jan 26 00:19:54 crc kubenswrapper[4697]: I0126 00:19:54.108218 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"939b11f77a4c51e1471118f420998cc00f0c043f53367aba22386a75dd2aebd6"} Jan 26 00:19:54 crc kubenswrapper[4697]: I0126 00:19:54.108797 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"1a4a2fc1bead30b6a2fb63a644cc27f05c666f0846396f9ee74fba29d995818d"} Jan 26 00:19:54 crc kubenswrapper[4697]: I0126 00:19:54.108811 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"6a5169f42f6fafe57249136eafcf184d46a35dbfde081b6dc8732218625b94c5"} Jan 26 00:19:54 crc kubenswrapper[4697]: I0126 00:19:54.108822 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"9b3452ae8bdb5a71a6064370839be3c50644cb65aaad3f1b178c0ea06313252c"} Jan 26 00:19:55 crc kubenswrapper[4697]: I0126 00:19:55.117582 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"dee6d726c9369e73feb67e378e784f25e6bfdcfef8756b1d2fc3f9fa4bb1f069"} Jan 26 00:19:55 crc kubenswrapper[4697]: I0126 00:19:55.117642 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" 
event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"0d6c3614991bd601ed1fec78513b339f2df94a10657168329900ebfbbf53675f"} Jan 26 00:19:57 crc kubenswrapper[4697]: I0126 00:19:57.133403 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"36213a8ce64c2162045aa4a5c74d66bf8a25d380fee3674e630a31f348b80db4"} Jan 26 00:20:04 crc kubenswrapper[4697]: I0126 00:20:04.182594 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" event={"ID":"18d9fb86-a56e-46be-beed-51882f079248","Type":"ContainerStarted","Data":"06f4e933ccbb54130a55f07d2dc99ce3fd3f418b7e161281c383f84c6f13a202"} Jan 26 00:20:05 crc kubenswrapper[4697]: I0126 00:20:05.187967 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:20:05 crc kubenswrapper[4697]: I0126 00:20:05.188331 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:20:05 crc kubenswrapper[4697]: I0126 00:20:05.216240 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:20:05 crc kubenswrapper[4697]: I0126 00:20:05.257377 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" podStartSLOduration=14.257357409 podStartE2EDuration="14.257357409s" podCreationTimestamp="2026-01-26 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:20:05.226711068 +0000 UTC m=+746.863488468" watchObservedRunningTime="2026-01-26 00:20:05.257357409 +0000 UTC m=+746.894134799" Jan 26 00:20:06 crc kubenswrapper[4697]: I0126 00:20:06.193101 4697 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:20:06 crc kubenswrapper[4697]: I0126 00:20:06.219293 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:20:06 crc kubenswrapper[4697]: I0126 00:20:06.328623 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:06 crc kubenswrapper[4697]: I0126 00:20:06.328733 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:20:08 crc kubenswrapper[4697]: I0126 00:20:08.234422 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6rmzr" Jan 26 00:20:36 crc kubenswrapper[4697]: I0126 00:20:36.328730 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:20:36 crc kubenswrapper[4697]: I0126 00:20:36.329541 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:21:06 
crc kubenswrapper[4697]: I0126 00:21:06.328915 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:21:06 crc kubenswrapper[4697]: I0126 00:21:06.329913 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:21:06 crc kubenswrapper[4697]: I0126 00:21:06.329996 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:21:06 crc kubenswrapper[4697]: I0126 00:21:06.331002 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed5dd1945eec0f6d970778673fb89939e592f26c1f170f55c1d612f6dec2ea84"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:21:06 crc kubenswrapper[4697]: I0126 00:21:06.331097 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" containerID="cri-o://ed5dd1945eec0f6d970778673fb89939e592f26c1f170f55c1d612f6dec2ea84" gracePeriod=600 Jan 26 00:21:07 crc kubenswrapper[4697]: I0126 00:21:07.523791 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" 
containerID="ed5dd1945eec0f6d970778673fb89939e592f26c1f170f55c1d612f6dec2ea84" exitCode=0 Jan 26 00:21:07 crc kubenswrapper[4697]: I0126 00:21:07.523856 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"ed5dd1945eec0f6d970778673fb89939e592f26c1f170f55c1d612f6dec2ea84"} Jan 26 00:21:07 crc kubenswrapper[4697]: I0126 00:21:07.524060 4697 scope.go:117] "RemoveContainer" containerID="ed731b9450168ff09b60b96e60d3ea86f0ad19bb9c61493c771bfbf93308c36f" Jan 26 00:21:08 crc kubenswrapper[4697]: I0126 00:21:08.531920 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"0256a7ce9795310ac1a75ce0ad16e52fa596c733c21aa020113015124b65b713"} Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.087148 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-528hw"] Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.089317 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-528hw" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="registry-server" containerID="cri-o://91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590" gracePeriod=30 Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.458645 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.568785 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-catalog-content\") pod \"3854920a-fa2c-47a2-ada5-887c5c0d0019\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.568929 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-utilities\") pod \"3854920a-fa2c-47a2-ada5-887c5c0d0019\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.570133 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkfhq\" (UniqueName: \"kubernetes.io/projected/3854920a-fa2c-47a2-ada5-887c5c0d0019-kube-api-access-xkfhq\") pod \"3854920a-fa2c-47a2-ada5-887c5c0d0019\" (UID: \"3854920a-fa2c-47a2-ada5-887c5c0d0019\") " Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.573630 4697 generic.go:334] "Generic (PLEG): container finished" podID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerID="91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590" exitCode=0 Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.573685 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-528hw" event={"ID":"3854920a-fa2c-47a2-ada5-887c5c0d0019","Type":"ContainerDied","Data":"91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590"} Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.573722 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-528hw" 
event={"ID":"3854920a-fa2c-47a2-ada5-887c5c0d0019","Type":"ContainerDied","Data":"43a2b3b8a71f8ecca652750f5533b2bf29db67752000455398a90821137d88c4"} Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.573746 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-528hw" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.573753 4697 scope.go:117] "RemoveContainer" containerID="91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.575514 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-utilities" (OuterVolumeSpecName: "utilities") pod "3854920a-fa2c-47a2-ada5-887c5c0d0019" (UID: "3854920a-fa2c-47a2-ada5-887c5c0d0019"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.577689 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3854920a-fa2c-47a2-ada5-887c5c0d0019-kube-api-access-xkfhq" (OuterVolumeSpecName: "kube-api-access-xkfhq") pod "3854920a-fa2c-47a2-ada5-887c5c0d0019" (UID: "3854920a-fa2c-47a2-ada5-887c5c0d0019"). InnerVolumeSpecName "kube-api-access-xkfhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.595312 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3854920a-fa2c-47a2-ada5-887c5c0d0019" (UID: "3854920a-fa2c-47a2-ada5-887c5c0d0019"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.599064 4697 scope.go:117] "RemoveContainer" containerID="a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.612963 4697 scope.go:117] "RemoveContainer" containerID="0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.629011 4697 scope.go:117] "RemoveContainer" containerID="91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590" Jan 26 00:21:13 crc kubenswrapper[4697]: E0126 00:21:13.629591 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590\": container with ID starting with 91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590 not found: ID does not exist" containerID="91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.629656 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590"} err="failed to get container status \"91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590\": rpc error: code = NotFound desc = could not find container \"91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590\": container with ID starting with 91221d6f626362322f5c9214b10f882d404d754152eeccae978d4e87c6a05590 not found: ID does not exist" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.629691 4697 scope.go:117] "RemoveContainer" containerID="a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12" Jan 26 00:21:13 crc kubenswrapper[4697]: E0126 00:21:13.630158 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12\": container with ID starting with a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12 not found: ID does not exist" containerID="a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.630201 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12"} err="failed to get container status \"a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12\": rpc error: code = NotFound desc = could not find container \"a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12\": container with ID starting with a6b14de01c70207964183fea165ba3745eb550f9753ed2caedde141eacce0d12 not found: ID does not exist" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.630228 4697 scope.go:117] "RemoveContainer" containerID="0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e" Jan 26 00:21:13 crc kubenswrapper[4697]: E0126 00:21:13.630622 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e\": container with ID starting with 0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e not found: ID does not exist" containerID="0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.630694 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e"} err="failed to get container status \"0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e\": rpc error: code = NotFound desc = could not find container \"0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e\": 
container with ID starting with 0b2ddc50953fb03ca208336b6466244bb37fb246330ccc84580856b137c1bd2e not found: ID does not exist" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.673196 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.673244 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3854920a-fa2c-47a2-ada5-887c5c0d0019-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.673255 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkfhq\" (UniqueName: \"kubernetes.io/projected/3854920a-fa2c-47a2-ada5-887c5c0d0019-kube-api-access-xkfhq\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.911985 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-528hw"] Jan 26 00:21:13 crc kubenswrapper[4697]: I0126 00:21:13.916281 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-528hw"] Jan 26 00:21:14 crc kubenswrapper[4697]: I0126 00:21:14.667848 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" path="/var/lib/kubelet/pods/3854920a-fa2c-47a2-ada5-887c5c0d0019/volumes" Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 00:21:16.967765 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"] Jan 26 00:21:16 crc kubenswrapper[4697]: E0126 00:21:16.968902 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="registry-server" Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 
00:21:16.968923 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="registry-server" Jan 26 00:21:16 crc kubenswrapper[4697]: E0126 00:21:16.969329 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="extract-utilities" Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 00:21:16.969354 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="extract-utilities" Jan 26 00:21:16 crc kubenswrapper[4697]: E0126 00:21:16.969383 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="extract-content" Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 00:21:16.969392 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="extract-content" Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 00:21:16.969776 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="3854920a-fa2c-47a2-ada5-887c5c0d0019" containerName="registry-server" Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 00:21:16.972337 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 00:21:16.975659 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 00:21:16 crc kubenswrapper[4697]: I0126 00:21:16.987436 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"]
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.116244 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65kqd\" (UniqueName: \"kubernetes.io/projected/aca9920c-df77-483a-a8ca-3bba0549b6cb-kube-api-access-65kqd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.116323 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.116345 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.217719 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65kqd\" (UniqueName: \"kubernetes.io/projected/aca9920c-df77-483a-a8ca-3bba0549b6cb-kube-api-access-65kqd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.217933 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.218002 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.218486 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.218486 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.237177 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65kqd\" (UniqueName: \"kubernetes.io/projected/aca9920c-df77-483a-a8ca-3bba0549b6cb-kube-api-access-65kqd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.288984 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:17 crc kubenswrapper[4697]: I0126 00:21:17.690288 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"]
Jan 26 00:21:18 crc kubenswrapper[4697]: I0126 00:21:18.611267 4697 generic.go:334] "Generic (PLEG): container finished" podID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerID="608bbcfc5b8b8eb26fcb42b82a8fba6c7cf4767a2d646ff4942ec5e0e78c7186" exitCode=0
Jan 26 00:21:18 crc kubenswrapper[4697]: I0126 00:21:18.611320 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6" event={"ID":"aca9920c-df77-483a-a8ca-3bba0549b6cb","Type":"ContainerDied","Data":"608bbcfc5b8b8eb26fcb42b82a8fba6c7cf4767a2d646ff4942ec5e0e78c7186"}
Jan 26 00:21:18 crc kubenswrapper[4697]: I0126 00:21:18.611720 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6" event={"ID":"aca9920c-df77-483a-a8ca-3bba0549b6cb","Type":"ContainerStarted","Data":"12bdad3af89c4e1587050b6dbc18a286faad525f03dfba03c0e1cac895d3020b"}
Jan 26 00:21:18 crc kubenswrapper[4697]: I0126 00:21:18.612835 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.486345 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-htkn4"]
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.487590 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.491289 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htkn4"]
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.653116 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-utilities\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.653612 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvqz2\" (UniqueName: \"kubernetes.io/projected/b43c7038-6f77-4343-b869-21a102a66aa5-kube-api-access-zvqz2\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.653650 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-catalog-content\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.755518 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvqz2\" (UniqueName: \"kubernetes.io/projected/b43c7038-6f77-4343-b869-21a102a66aa5-kube-api-access-zvqz2\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.755581 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-catalog-content\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.755659 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-utilities\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.756273 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-catalog-content\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.756691 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-utilities\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.776567 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvqz2\" (UniqueName: \"kubernetes.io/projected/b43c7038-6f77-4343-b869-21a102a66aa5-kube-api-access-zvqz2\") pod \"redhat-operators-htkn4\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:19 crc kubenswrapper[4697]: I0126 00:21:19.808135 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:20 crc kubenswrapper[4697]: I0126 00:21:20.242921 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htkn4"]
Jan 26 00:21:20 crc kubenswrapper[4697]: W0126 00:21:20.255281 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43c7038_6f77_4343_b869_21a102a66aa5.slice/crio-e2ea970f3aba2e1362398efc70e66695295ff40164aeb06dcb9d329385e8d19d WatchSource:0}: Error finding container e2ea970f3aba2e1362398efc70e66695295ff40164aeb06dcb9d329385e8d19d: Status 404 returned error can't find the container with id e2ea970f3aba2e1362398efc70e66695295ff40164aeb06dcb9d329385e8d19d
Jan 26 00:21:20 crc kubenswrapper[4697]: I0126 00:21:20.622305 4697 generic.go:334] "Generic (PLEG): container finished" podID="b43c7038-6f77-4343-b869-21a102a66aa5" containerID="d910081b6c2e382107862d982e6df4e1bdb8841014861aeafd5bb25854589961" exitCode=0
Jan 26 00:21:20 crc kubenswrapper[4697]: I0126 00:21:20.622381 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htkn4" event={"ID":"b43c7038-6f77-4343-b869-21a102a66aa5","Type":"ContainerDied","Data":"d910081b6c2e382107862d982e6df4e1bdb8841014861aeafd5bb25854589961"}
Jan 26 00:21:20 crc kubenswrapper[4697]: I0126 00:21:20.622459 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htkn4" event={"ID":"b43c7038-6f77-4343-b869-21a102a66aa5","Type":"ContainerStarted","Data":"e2ea970f3aba2e1362398efc70e66695295ff40164aeb06dcb9d329385e8d19d"}
Jan 26 00:21:20 crc kubenswrapper[4697]: I0126 00:21:20.624410 4697 generic.go:334] "Generic (PLEG): container finished" podID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerID="46ffe3199d67abdb183798c31819dfec17cc5aef1acbf989f573bb7b323588f9" exitCode=0
Jan 26 00:21:20 crc kubenswrapper[4697]: I0126 00:21:20.624458 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6" event={"ID":"aca9920c-df77-483a-a8ca-3bba0549b6cb","Type":"ContainerDied","Data":"46ffe3199d67abdb183798c31819dfec17cc5aef1acbf989f573bb7b323588f9"}
Jan 26 00:21:21 crc kubenswrapper[4697]: I0126 00:21:21.631774 4697 generic.go:334] "Generic (PLEG): container finished" podID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerID="1b82b75ac8d6c6bee28d1356356921ec150b99cfae73cba4a62b0fa3af438fbf" exitCode=0
Jan 26 00:21:21 crc kubenswrapper[4697]: I0126 00:21:21.631839 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6" event={"ID":"aca9920c-df77-483a-a8ca-3bba0549b6cb","Type":"ContainerDied","Data":"1b82b75ac8d6c6bee28d1356356921ec150b99cfae73cba4a62b0fa3af438fbf"}
Jan 26 00:21:21 crc kubenswrapper[4697]: I0126 00:21:21.634215 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htkn4" event={"ID":"b43c7038-6f77-4343-b869-21a102a66aa5","Type":"ContainerStarted","Data":"6ad7b170d6af875788e76f2a739d27de7e60d4b375166f540f7edeb47c1d5228"}
Jan 26 00:21:22 crc kubenswrapper[4697]: I0126 00:21:22.642752 4697 generic.go:334] "Generic (PLEG): container finished" podID="b43c7038-6f77-4343-b869-21a102a66aa5" containerID="6ad7b170d6af875788e76f2a739d27de7e60d4b375166f540f7edeb47c1d5228" exitCode=0
Jan 26 00:21:22 crc kubenswrapper[4697]: I0126 00:21:22.642842 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htkn4" event={"ID":"b43c7038-6f77-4343-b869-21a102a66aa5","Type":"ContainerDied","Data":"6ad7b170d6af875788e76f2a739d27de7e60d4b375166f540f7edeb47c1d5228"}
Jan 26 00:21:22 crc kubenswrapper[4697]: I0126 00:21:22.919876 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.098211 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-bundle\") pod \"aca9920c-df77-483a-a8ca-3bba0549b6cb\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") "
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.098472 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65kqd\" (UniqueName: \"kubernetes.io/projected/aca9920c-df77-483a-a8ca-3bba0549b6cb-kube-api-access-65kqd\") pod \"aca9920c-df77-483a-a8ca-3bba0549b6cb\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") "
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.098509 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-util\") pod \"aca9920c-df77-483a-a8ca-3bba0549b6cb\" (UID: \"aca9920c-df77-483a-a8ca-3bba0549b6cb\") "
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.100917 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-bundle" (OuterVolumeSpecName: "bundle") pod "aca9920c-df77-483a-a8ca-3bba0549b6cb" (UID: "aca9920c-df77-483a-a8ca-3bba0549b6cb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.103597 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aca9920c-df77-483a-a8ca-3bba0549b6cb-kube-api-access-65kqd" (OuterVolumeSpecName: "kube-api-access-65kqd") pod "aca9920c-df77-483a-a8ca-3bba0549b6cb" (UID: "aca9920c-df77-483a-a8ca-3bba0549b6cb"). InnerVolumeSpecName "kube-api-access-65kqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.113815 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-util" (OuterVolumeSpecName: "util") pod "aca9920c-df77-483a-a8ca-3bba0549b6cb" (UID: "aca9920c-df77-483a-a8ca-3bba0549b6cb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.199488 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65kqd\" (UniqueName: \"kubernetes.io/projected/aca9920c-df77-483a-a8ca-3bba0549b6cb-kube-api-access-65kqd\") on node \"crc\" DevicePath \"\""
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.199522 4697 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-util\") on node \"crc\" DevicePath \"\""
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.199532 4697 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aca9920c-df77-483a-a8ca-3bba0549b6cb-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.655207 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6" event={"ID":"aca9920c-df77-483a-a8ca-3bba0549b6cb","Type":"ContainerDied","Data":"12bdad3af89c4e1587050b6dbc18a286faad525f03dfba03c0e1cac895d3020b"}
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.655770 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12bdad3af89c4e1587050b6dbc18a286faad525f03dfba03c0e1cac895d3020b"
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.655252 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6"
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.662930 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htkn4" event={"ID":"b43c7038-6f77-4343-b869-21a102a66aa5","Type":"ContainerStarted","Data":"189c7f400ab1508e797531819e293b6c3053855a17a9c04507dfe865f036a3fe"}
Jan 26 00:21:23 crc kubenswrapper[4697]: I0126 00:21:23.690428 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-htkn4" podStartSLOduration=2.17427775 podStartE2EDuration="4.690399058s" podCreationTimestamp="2026-01-26 00:21:19 +0000 UTC" firstStartedPulling="2026-01-26 00:21:20.623988613 +0000 UTC m=+822.260766003" lastFinishedPulling="2026-01-26 00:21:23.140109921 +0000 UTC m=+824.776887311" observedRunningTime="2026-01-26 00:21:23.68567866 +0000 UTC m=+825.322456050" watchObservedRunningTime="2026-01-26 00:21:23.690399058 +0000 UTC m=+825.327176448"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.720552 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"]
Jan 26 00:21:26 crc kubenswrapper[4697]: E0126 00:21:26.721683 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerName="util"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.721705 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerName="util"
Jan 26 00:21:26 crc kubenswrapper[4697]: E0126 00:21:26.721738 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerName="pull"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.721751 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerName="pull"
Jan 26 00:21:26 crc kubenswrapper[4697]: E0126 00:21:26.721764 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerName="extract"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.721773 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerName="extract"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.721937 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="aca9920c-df77-483a-a8ca-3bba0549b6cb" containerName="extract"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.723186 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.725124 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.740376 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"]
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.848397 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.848462 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc52r\" (UniqueName: \"kubernetes.io/projected/b29c2002-ed3e-4018-84b9-c9760d243cb7-kube-api-access-wc52r\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.848500 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.949751 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc52r\" (UniqueName: \"kubernetes.io/projected/b29c2002-ed3e-4018-84b9-c9760d243cb7-kube-api-access-wc52r\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.949895 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.949984 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.950706 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.950725 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:26 crc kubenswrapper[4697]: I0126 00:21:26.977914 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc52r\" (UniqueName: \"kubernetes.io/projected/b29c2002-ed3e-4018-84b9-c9760d243cb7-kube-api-access-wc52r\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.041546 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.303634 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k"]
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.682745 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" event={"ID":"b29c2002-ed3e-4018-84b9-c9760d243cb7","Type":"ContainerStarted","Data":"c0b8d70c30d4cf4b9be6e5c8e12ff0dedd6258b341d9b3d81c4b61b5602b379f"}
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.737467 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"]
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.738444 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.750165 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"]
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.860434 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.860535 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frd4h\" (UniqueName: \"kubernetes.io/projected/4c2b0caf-95fc-4900-b54d-365c27b99671-kube-api-access-frd4h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.860619 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.961939 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.962386 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.962663 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frd4h\" (UniqueName: \"kubernetes.io/projected/4c2b0caf-95fc-4900-b54d-365c27b99671-kube-api-access-frd4h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.962859 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.963415 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:27 crc kubenswrapper[4697]: I0126 00:21:27.990056 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frd4h\" (UniqueName: \"kubernetes.io/projected/4c2b0caf-95fc-4900-b54d-365c27b99671-kube-api-access-frd4h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:28 crc kubenswrapper[4697]: I0126 00:21:28.054438 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"
Jan 26 00:21:28 crc kubenswrapper[4697]: I0126 00:21:28.573806 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk"]
Jan 26 00:21:28 crc kubenswrapper[4697]: I0126 00:21:28.688482 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk" event={"ID":"4c2b0caf-95fc-4900-b54d-365c27b99671","Type":"ContainerStarted","Data":"1e962c1308e9bad4e788cdd3a31bdd9c334a22eb1977cca1c628746cea0ef1a7"}
Jan 26 00:21:29 crc kubenswrapper[4697]: I0126 00:21:29.698600 4697 generic.go:334] "Generic (PLEG): container finished" podID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerID="e2df0e500c7b70f14b2f77d1987b41b94e81a91a83c94ab0727528b6e35407d5" exitCode=0
Jan 26 00:21:29 crc kubenswrapper[4697]: I0126 00:21:29.699129 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" event={"ID":"b29c2002-ed3e-4018-84b9-c9760d243cb7","Type":"ContainerDied","Data":"e2df0e500c7b70f14b2f77d1987b41b94e81a91a83c94ab0727528b6e35407d5"}
Jan 26 00:21:29 crc kubenswrapper[4697]: I0126 00:21:29.703316 4697 generic.go:334] "Generic (PLEG): container finished" podID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerID="72bde7133c1a3273b28b6e551f00708c342f16784be9fb036857f976e8207c61" exitCode=0
Jan 26 00:21:29 crc kubenswrapper[4697]: I0126 00:21:29.703449 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk" event={"ID":"4c2b0caf-95fc-4900-b54d-365c27b99671","Type":"ContainerDied","Data":"72bde7133c1a3273b28b6e551f00708c342f16784be9fb036857f976e8207c61"}
Jan 26 00:21:29 crc kubenswrapper[4697]: I0126 00:21:29.825172 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:29 crc kubenswrapper[4697]: I0126 00:21:29.825225 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:29 crc kubenswrapper[4697]: I0126 00:21:29.895301 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:30 crc kubenswrapper[4697]: I0126 00:21:30.916538 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-htkn4"
Jan 26 00:21:31 crc kubenswrapper[4697]: I0126 00:21:31.727457 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stb4b"]
Jan 26 00:21:31 crc kubenswrapper[4697]: I0126 00:21:31.743917 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:31 crc kubenswrapper[4697]: I0126 00:21:31.902774 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-utilities\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:31 crc kubenswrapper[4697]: I0126 00:21:31.902839 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksw8z\" (UniqueName: \"kubernetes.io/projected/5d6c6048-7761-4e30-8b56-82abcf5c8937-kube-api-access-ksw8z\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:31 crc kubenswrapper[4697]: I0126 00:21:31.902876 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-catalog-content\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.005703 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-utilities\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.005782 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksw8z\" (UniqueName: \"kubernetes.io/projected/5d6c6048-7761-4e30-8b56-82abcf5c8937-kube-api-access-ksw8z\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.005825 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-catalog-content\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.006359 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-catalog-content\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.006464 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-utilities\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.007960 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stb4b"]
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.076654 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksw8z\" (UniqueName: \"kubernetes.io/projected/5d6c6048-7761-4e30-8b56-82abcf5c8937-kube-api-access-ksw8z\") pod \"certified-operators-stb4b\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.199569 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stb4b"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.733345 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" event={"ID":"b29c2002-ed3e-4018-84b9-c9760d243cb7","Type":"ContainerStarted","Data":"e6a2d0af2b8e18041c9fbe378cfbaad40875995197cca245158fea0c2e0c416a"}
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.897399 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8"]
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.898469 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.917754 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8"]
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.932552 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8"
Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.932616 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhz2q\" (UniqueName: \"kubernetes.io/projected/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-kube-api-access-bhz2q\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") "
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:32 crc kubenswrapper[4697]: I0126 00:21:32.932648 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:33 crc kubenswrapper[4697]: I0126 00:21:33.034598 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:33 crc kubenswrapper[4697]: I0126 00:21:33.034724 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:33 crc kubenswrapper[4697]: I0126 00:21:33.034799 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhz2q\" (UniqueName: \"kubernetes.io/projected/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-kube-api-access-bhz2q\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:33 crc kubenswrapper[4697]: 
I0126 00:21:33.035378 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:33 crc kubenswrapper[4697]: I0126 00:21:33.035706 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:33 crc kubenswrapper[4697]: I0126 00:21:33.056430 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhz2q\" (UniqueName: \"kubernetes.io/projected/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-kube-api-access-bhz2q\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:33 crc kubenswrapper[4697]: I0126 00:21:33.211789 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:21:34 crc kubenswrapper[4697]: I0126 00:21:34.288810 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htkn4"] Jan 26 00:21:34 crc kubenswrapper[4697]: I0126 00:21:34.289045 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-htkn4" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="registry-server" containerID="cri-o://189c7f400ab1508e797531819e293b6c3053855a17a9c04507dfe865f036a3fe" gracePeriod=2 Jan 26 00:21:34 crc kubenswrapper[4697]: I0126 00:21:34.775651 4697 generic.go:334] "Generic (PLEG): container finished" podID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerID="e6a2d0af2b8e18041c9fbe378cfbaad40875995197cca245158fea0c2e0c416a" exitCode=0 Jan 26 00:21:34 crc kubenswrapper[4697]: I0126 00:21:34.775864 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" event={"ID":"b29c2002-ed3e-4018-84b9-c9760d243cb7","Type":"ContainerDied","Data":"e6a2d0af2b8e18041c9fbe378cfbaad40875995197cca245158fea0c2e0c416a"} Jan 26 00:21:35 crc kubenswrapper[4697]: I0126 00:21:35.812467 4697 generic.go:334] "Generic (PLEG): container finished" podID="b43c7038-6f77-4343-b869-21a102a66aa5" containerID="189c7f400ab1508e797531819e293b6c3053855a17a9c04507dfe865f036a3fe" exitCode=0 Jan 26 00:21:35 crc kubenswrapper[4697]: I0126 00:21:35.812690 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htkn4" event={"ID":"b43c7038-6f77-4343-b869-21a102a66aa5","Type":"ContainerDied","Data":"189c7f400ab1508e797531819e293b6c3053855a17a9c04507dfe865f036a3fe"} Jan 26 00:21:35 crc kubenswrapper[4697]: I0126 00:21:35.994736 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8"] Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.300266 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stb4b"] Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.543268 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htkn4" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.647349 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvqz2\" (UniqueName: \"kubernetes.io/projected/b43c7038-6f77-4343-b869-21a102a66aa5-kube-api-access-zvqz2\") pod \"b43c7038-6f77-4343-b869-21a102a66aa5\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.647499 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-catalog-content\") pod \"b43c7038-6f77-4343-b869-21a102a66aa5\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.647523 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-utilities\") pod \"b43c7038-6f77-4343-b869-21a102a66aa5\" (UID: \"b43c7038-6f77-4343-b869-21a102a66aa5\") " Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.648780 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-utilities" (OuterVolumeSpecName: "utilities") pod "b43c7038-6f77-4343-b869-21a102a66aa5" (UID: "b43c7038-6f77-4343-b869-21a102a66aa5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.654045 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43c7038-6f77-4343-b869-21a102a66aa5-kube-api-access-zvqz2" (OuterVolumeSpecName: "kube-api-access-zvqz2") pod "b43c7038-6f77-4343-b869-21a102a66aa5" (UID: "b43c7038-6f77-4343-b869-21a102a66aa5"). InnerVolumeSpecName "kube-api-access-zvqz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.749601 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.749646 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvqz2\" (UniqueName: \"kubernetes.io/projected/b43c7038-6f77-4343-b869-21a102a66aa5-kube-api-access-zvqz2\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.824190 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" event={"ID":"b29c2002-ed3e-4018-84b9-c9760d243cb7","Type":"ContainerStarted","Data":"cb57b658b683a3ed46ef75384cc60d87123fd729ecf8c0d9fda5c176903527bf"} Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.825489 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stb4b" event={"ID":"5d6c6048-7761-4e30-8b56-82abcf5c8937","Type":"ContainerStarted","Data":"fa7c4b0069d4c5546743c8ba87aabf9af1337a22052de7e7c0b4bfd0d9689f3b"} Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.828267 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htkn4" 
event={"ID":"b43c7038-6f77-4343-b869-21a102a66aa5","Type":"ContainerDied","Data":"e2ea970f3aba2e1362398efc70e66695295ff40164aeb06dcb9d329385e8d19d"} Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.828323 4697 scope.go:117] "RemoveContainer" containerID="189c7f400ab1508e797531819e293b6c3053855a17a9c04507dfe865f036a3fe" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.828282 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htkn4" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.829354 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" event={"ID":"f4412bc5-df84-4f33-9640-a98bc9e0f9cc","Type":"ContainerStarted","Data":"70ea2ea64f894ea4e181f025642a9f08b5dde82478fbbfcb2844d01b96e37745"} Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.847088 4697 scope.go:117] "RemoveContainer" containerID="6ad7b170d6af875788e76f2a739d27de7e60d4b375166f540f7edeb47c1d5228" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.881199 4697 scope.go:117] "RemoveContainer" containerID="d910081b6c2e382107862d982e6df4e1bdb8841014861aeafd5bb25854589961" Jan 26 00:21:36 crc kubenswrapper[4697]: I0126 00:21:36.966190 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b43c7038-6f77-4343-b869-21a102a66aa5" (UID: "b43c7038-6f77-4343-b869-21a102a66aa5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.054710 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43c7038-6f77-4343-b869-21a102a66aa5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.162365 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htkn4"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.167492 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-htkn4"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.681193 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x"] Jan 26 00:21:37 crc kubenswrapper[4697]: E0126 00:21:37.681637 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="extract-utilities" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.681649 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="extract-utilities" Jan 26 00:21:37 crc kubenswrapper[4697]: E0126 00:21:37.681662 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="registry-server" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.681671 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="registry-server" Jan 26 00:21:37 crc kubenswrapper[4697]: E0126 00:21:37.681683 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="extract-content" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.681689 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="extract-content" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.681778 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" containerName="registry-server" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.682135 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.685239 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.685348 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-h4htl" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.700494 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.733248 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.763534 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjq9\" (UniqueName: \"kubernetes.io/projected/a0c71d7a-0767-481f-9f8d-e888252ed0f3-kube-api-access-qsjq9\") pod \"obo-prometheus-operator-68bc856cb9-cfj6x\" (UID: \"a0c71d7a-0767-481f-9f8d-e888252ed0f3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.765841 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.766717 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.771985 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-vd7xq" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.772014 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.775953 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.776765 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.789067 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.800204 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.835360 4697 generic.go:334] "Generic (PLEG): container finished" podID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerID="114990070a170da7221bd7224b4fe11f4ac696ade553d5076e4cbb465b8c8833" exitCode=0 Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.835499 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stb4b" event={"ID":"5d6c6048-7761-4e30-8b56-82abcf5c8937","Type":"ContainerDied","Data":"114990070a170da7221bd7224b4fe11f4ac696ade553d5076e4cbb465b8c8833"} Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 
00:21:37.845844 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" event={"ID":"b29c2002-ed3e-4018-84b9-c9760d243cb7","Type":"ContainerDied","Data":"cb57b658b683a3ed46ef75384cc60d87123fd729ecf8c0d9fda5c176903527bf"} Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.845894 4697 generic.go:334] "Generic (PLEG): container finished" podID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerID="cb57b658b683a3ed46ef75384cc60d87123fd729ecf8c0d9fda5c176903527bf" exitCode=0 Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.851183 4697 generic.go:334] "Generic (PLEG): container finished" podID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerID="a7f714ab396ebd74d6aadeaffb84e5f3ddaa3c5a7d9bee54967616fc959e8935" exitCode=0 Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.851246 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk" event={"ID":"4c2b0caf-95fc-4900-b54d-365c27b99671","Type":"ContainerDied","Data":"a7f714ab396ebd74d6aadeaffb84e5f3ddaa3c5a7d9bee54967616fc959e8935"} Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.856817 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerID="a51aa5cc808bd7c56d1298e3a4df1f439adf8ab2d383a7f894434a04afec5943" exitCode=0 Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.856871 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" event={"ID":"f4412bc5-df84-4f33-9640-a98bc9e0f9cc","Type":"ContainerDied","Data":"a51aa5cc808bd7c56d1298e3a4df1f439adf8ab2d383a7f894434a04afec5943"} Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.866166 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/44233f32-bd83-47a6-bcee-47c8b02e5e0b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf\" (UID: \"44233f32-bd83-47a6-bcee-47c8b02e5e0b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.866210 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c53bce5-4c78-4410-94da-1feadaf217a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z\" (UID: \"1c53bce5-4c78-4410-94da-1feadaf217a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.866252 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjq9\" (UniqueName: \"kubernetes.io/projected/a0c71d7a-0767-481f-9f8d-e888252ed0f3-kube-api-access-qsjq9\") pod \"obo-prometheus-operator-68bc856cb9-cfj6x\" (UID: \"a0c71d7a-0767-481f-9f8d-e888252ed0f3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.866572 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c53bce5-4c78-4410-94da-1feadaf217a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z\" (UID: \"1c53bce5-4c78-4410-94da-1feadaf217a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.866630 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44233f32-bd83-47a6-bcee-47c8b02e5e0b-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf\" (UID: \"44233f32-bd83-47a6-bcee-47c8b02e5e0b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.887459 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjq9\" (UniqueName: \"kubernetes.io/projected/a0c71d7a-0767-481f-9f8d-e888252ed0f3-kube-api-access-qsjq9\") pod \"obo-prometheus-operator-68bc856cb9-cfj6x\" (UID: \"a0c71d7a-0767-481f-9f8d-e888252ed0f3\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.930303 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wvrz9"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.930932 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.933437 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.937687 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-t8zck" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.941693 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wvrz9"] Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.967573 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c53bce5-4c78-4410-94da-1feadaf217a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z\" (UID: \"1c53bce5-4c78-4410-94da-1feadaf217a6\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.967630 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44233f32-bd83-47a6-bcee-47c8b02e5e0b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf\" (UID: \"44233f32-bd83-47a6-bcee-47c8b02e5e0b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.967660 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44233f32-bd83-47a6-bcee-47c8b02e5e0b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf\" (UID: \"44233f32-bd83-47a6-bcee-47c8b02e5e0b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.967682 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c53bce5-4c78-4410-94da-1feadaf217a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z\" (UID: \"1c53bce5-4c78-4410-94da-1feadaf217a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.976620 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44233f32-bd83-47a6-bcee-47c8b02e5e0b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf\" (UID: \"44233f32-bd83-47a6-bcee-47c8b02e5e0b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.976659 4697 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44233f32-bd83-47a6-bcee-47c8b02e5e0b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf\" (UID: \"44233f32-bd83-47a6-bcee-47c8b02e5e0b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.987220 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c53bce5-4c78-4410-94da-1feadaf217a6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z\" (UID: \"1c53bce5-4c78-4410-94da-1feadaf217a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:37 crc kubenswrapper[4697]: I0126 00:21:37.987437 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c53bce5-4c78-4410-94da-1feadaf217a6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z\" (UID: \"1c53bce5-4c78-4410-94da-1feadaf217a6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.039245 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.068737 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzxk\" (UniqueName: \"kubernetes.io/projected/d715f6c3-3dad-4e23-99a7-fed27f169907-kube-api-access-ztzxk\") pod \"observability-operator-59bdc8b94-wvrz9\" (UID: \"d715f6c3-3dad-4e23-99a7-fed27f169907\") " pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.068796 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d715f6c3-3dad-4e23-99a7-fed27f169907-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wvrz9\" (UID: \"d715f6c3-3dad-4e23-99a7-fed27f169907\") " pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.080174 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.095508 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.102465 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xf6fx"] Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.103189 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.105272 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jpjpq" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.165752 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xf6fx"] Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.170187 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztzxk\" (UniqueName: \"kubernetes.io/projected/d715f6c3-3dad-4e23-99a7-fed27f169907-kube-api-access-ztzxk\") pod \"observability-operator-59bdc8b94-wvrz9\" (UID: \"d715f6c3-3dad-4e23-99a7-fed27f169907\") " pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.170242 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d715f6c3-3dad-4e23-99a7-fed27f169907-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wvrz9\" (UID: \"d715f6c3-3dad-4e23-99a7-fed27f169907\") " pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.199630 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/d715f6c3-3dad-4e23-99a7-fed27f169907-observability-operator-tls\") pod \"observability-operator-59bdc8b94-wvrz9\" (UID: \"d715f6c3-3dad-4e23-99a7-fed27f169907\") " pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.202122 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztzxk\" (UniqueName: 
\"kubernetes.io/projected/d715f6c3-3dad-4e23-99a7-fed27f169907-kube-api-access-ztzxk\") pod \"observability-operator-59bdc8b94-wvrz9\" (UID: \"d715f6c3-3dad-4e23-99a7-fed27f169907\") " pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.269456 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.271633 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac806297-3fe1-4e19-8a22-d98dd2bfbbfd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xf6fx\" (UID: \"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd\") " pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.271743 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrx8b\" (UniqueName: \"kubernetes.io/projected/ac806297-3fe1-4e19-8a22-d98dd2bfbbfd-kube-api-access-vrx8b\") pod \"perses-operator-5bf474d74f-xf6fx\" (UID: \"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd\") " pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.373148 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrx8b\" (UniqueName: \"kubernetes.io/projected/ac806297-3fe1-4e19-8a22-d98dd2bfbbfd-kube-api-access-vrx8b\") pod \"perses-operator-5bf474d74f-xf6fx\" (UID: \"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd\") " pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.373274 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ac806297-3fe1-4e19-8a22-d98dd2bfbbfd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xf6fx\" (UID: \"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd\") " pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.374326 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac806297-3fe1-4e19-8a22-d98dd2bfbbfd-openshift-service-ca\") pod \"perses-operator-5bf474d74f-xf6fx\" (UID: \"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd\") " pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.387234 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x"] Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.399599 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrx8b\" (UniqueName: \"kubernetes.io/projected/ac806297-3fe1-4e19-8a22-d98dd2bfbbfd-kube-api-access-vrx8b\") pod \"perses-operator-5bf474d74f-xf6fx\" (UID: \"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd\") " pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.469875 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf"] Jan 26 00:21:38 crc kubenswrapper[4697]: W0126 00:21:38.483176 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44233f32_bd83_47a6_bcee_47c8b02e5e0b.slice/crio-16ad3280212b09b92507aef654b3422e7557c252b8de119318090e8a5ba0ca44 WatchSource:0}: Error finding container 16ad3280212b09b92507aef654b3422e7557c252b8de119318090e8a5ba0ca44: Status 404 returned error can't find the container with id 16ad3280212b09b92507aef654b3422e7557c252b8de119318090e8a5ba0ca44 Jan 26 
00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.543594 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-wvrz9"] Jan 26 00:21:38 crc kubenswrapper[4697]: W0126 00:21:38.552028 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd715f6c3_3dad_4e23_99a7_fed27f169907.slice/crio-36be43ec8bf6e0f0f4d82775140d3a53a01173e1b2a74e5dfd5e7cce16cc3e1c WatchSource:0}: Error finding container 36be43ec8bf6e0f0f4d82775140d3a53a01173e1b2a74e5dfd5e7cce16cc3e1c: Status 404 returned error can't find the container with id 36be43ec8bf6e0f0f4d82775140d3a53a01173e1b2a74e5dfd5e7cce16cc3e1c Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.558821 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.596462 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z"] Jan 26 00:21:38 crc kubenswrapper[4697]: W0126 00:21:38.603124 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c53bce5_4c78_4410_94da_1feadaf217a6.slice/crio-83fa40cd9a2a0cd7dcb155c721d99af6efcaa29b18bd47aead1db99ce69688f7 WatchSource:0}: Error finding container 83fa40cd9a2a0cd7dcb155c721d99af6efcaa29b18bd47aead1db99ce69688f7: Status 404 returned error can't find the container with id 83fa40cd9a2a0cd7dcb155c721d99af6efcaa29b18bd47aead1db99ce69688f7 Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.695049 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43c7038-6f77-4343-b869-21a102a66aa5" path="/var/lib/kubelet/pods/b43c7038-6f77-4343-b869-21a102a66aa5/volumes" Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.904218 4697 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" event={"ID":"a0c71d7a-0767-481f-9f8d-e888252ed0f3","Type":"ContainerStarted","Data":"e96affdb451fe2251bbec64f9920759d785e0e7784bd36a82e2b4fef8b399239"} Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.905847 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" event={"ID":"1c53bce5-4c78-4410-94da-1feadaf217a6","Type":"ContainerStarted","Data":"83fa40cd9a2a0cd7dcb155c721d99af6efcaa29b18bd47aead1db99ce69688f7"} Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.908593 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" event={"ID":"44233f32-bd83-47a6-bcee-47c8b02e5e0b","Type":"ContainerStarted","Data":"16ad3280212b09b92507aef654b3422e7557c252b8de119318090e8a5ba0ca44"} Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.911885 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stb4b" event={"ID":"5d6c6048-7761-4e30-8b56-82abcf5c8937","Type":"ContainerStarted","Data":"a769fb35c6d7aa214bf182de04eb58a2842c92ae16525736ebf8cc5ed7a29118"} Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.919010 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" event={"ID":"d715f6c3-3dad-4e23-99a7-fed27f169907","Type":"ContainerStarted","Data":"36be43ec8bf6e0f0f4d82775140d3a53a01173e1b2a74e5dfd5e7cce16cc3e1c"} Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.932488 4697 generic.go:334] "Generic (PLEG): container finished" podID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerID="fba2f26b64700a7bd07395f13582e0ae0fe1f45128a60334d903366a2a8bb316" exitCode=0 Jan 26 00:21:38 crc kubenswrapper[4697]: I0126 00:21:38.932803 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk" event={"ID":"4c2b0caf-95fc-4900-b54d-365c27b99671","Type":"ContainerDied","Data":"fba2f26b64700a7bd07395f13582e0ae0fe1f45128a60334d903366a2a8bb316"} Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.018339 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-xf6fx"] Jan 26 00:21:39 crc kubenswrapper[4697]: W0126 00:21:39.029199 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac806297_3fe1_4e19_8a22_d98dd2bfbbfd.slice/crio-e6e5b751d695b58f8c55cf893982b509318de38d090641e363ca197edc258d3d WatchSource:0}: Error finding container e6e5b751d695b58f8c55cf893982b509318de38d090641e363ca197edc258d3d: Status 404 returned error can't find the container with id e6e5b751d695b58f8c55cf893982b509318de38d090641e363ca197edc258d3d Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.495125 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.603896 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-bundle\") pod \"b29c2002-ed3e-4018-84b9-c9760d243cb7\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.604002 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc52r\" (UniqueName: \"kubernetes.io/projected/b29c2002-ed3e-4018-84b9-c9760d243cb7-kube-api-access-wc52r\") pod \"b29c2002-ed3e-4018-84b9-c9760d243cb7\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.604032 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-util\") pod \"b29c2002-ed3e-4018-84b9-c9760d243cb7\" (UID: \"b29c2002-ed3e-4018-84b9-c9760d243cb7\") " Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.605286 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-bundle" (OuterVolumeSpecName: "bundle") pod "b29c2002-ed3e-4018-84b9-c9760d243cb7" (UID: "b29c2002-ed3e-4018-84b9-c9760d243cb7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.626286 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29c2002-ed3e-4018-84b9-c9760d243cb7-kube-api-access-wc52r" (OuterVolumeSpecName: "kube-api-access-wc52r") pod "b29c2002-ed3e-4018-84b9-c9760d243cb7" (UID: "b29c2002-ed3e-4018-84b9-c9760d243cb7"). InnerVolumeSpecName "kube-api-access-wc52r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.633148 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-util" (OuterVolumeSpecName: "util") pod "b29c2002-ed3e-4018-84b9-c9760d243cb7" (UID: "b29c2002-ed3e-4018-84b9-c9760d243cb7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.705204 4697 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.705248 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc52r\" (UniqueName: \"kubernetes.io/projected/b29c2002-ed3e-4018-84b9-c9760d243cb7-kube-api-access-wc52r\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.705265 4697 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b29c2002-ed3e-4018-84b9-c9760d243cb7-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.942914 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" event={"ID":"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd","Type":"ContainerStarted","Data":"e6e5b751d695b58f8c55cf893982b509318de38d090641e363ca197edc258d3d"} Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.946374 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" event={"ID":"b29c2002-ed3e-4018-84b9-c9760d243cb7","Type":"ContainerDied","Data":"c0b8d70c30d4cf4b9be6e5c8e12ff0dedd6258b341d9b3d81c4b61b5602b379f"} Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 
00:21:39.946415 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b8d70c30d4cf4b9be6e5c8e12ff0dedd6258b341d9b3d81c4b61b5602b379f" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.946503 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k" Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.963641 4697 generic.go:334] "Generic (PLEG): container finished" podID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerID="a769fb35c6d7aa214bf182de04eb58a2842c92ae16525736ebf8cc5ed7a29118" exitCode=0 Jan 26 00:21:39 crc kubenswrapper[4697]: I0126 00:21:39.963800 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stb4b" event={"ID":"5d6c6048-7761-4e30-8b56-82abcf5c8937","Type":"ContainerDied","Data":"a769fb35c6d7aa214bf182de04eb58a2842c92ae16525736ebf8cc5ed7a29118"} Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.318514 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.416933 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-bundle\") pod \"4c2b0caf-95fc-4900-b54d-365c27b99671\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.417029 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-util\") pod \"4c2b0caf-95fc-4900-b54d-365c27b99671\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.417129 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frd4h\" (UniqueName: \"kubernetes.io/projected/4c2b0caf-95fc-4900-b54d-365c27b99671-kube-api-access-frd4h\") pod \"4c2b0caf-95fc-4900-b54d-365c27b99671\" (UID: \"4c2b0caf-95fc-4900-b54d-365c27b99671\") " Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.417767 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-bundle" (OuterVolumeSpecName: "bundle") pod "4c2b0caf-95fc-4900-b54d-365c27b99671" (UID: "4c2b0caf-95fc-4900-b54d-365c27b99671"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.422207 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2b0caf-95fc-4900-b54d-365c27b99671-kube-api-access-frd4h" (OuterVolumeSpecName: "kube-api-access-frd4h") pod "4c2b0caf-95fc-4900-b54d-365c27b99671" (UID: "4c2b0caf-95fc-4900-b54d-365c27b99671"). InnerVolumeSpecName "kube-api-access-frd4h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.430595 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-util" (OuterVolumeSpecName: "util") pod "4c2b0caf-95fc-4900-b54d-365c27b99671" (UID: "4c2b0caf-95fc-4900-b54d-365c27b99671"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.518828 4697 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.518876 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frd4h\" (UniqueName: \"kubernetes.io/projected/4c2b0caf-95fc-4900-b54d-365c27b99671-kube-api-access-frd4h\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.518890 4697 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4c2b0caf-95fc-4900-b54d-365c27b99671-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.972983 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk" event={"ID":"4c2b0caf-95fc-4900-b54d-365c27b99671","Type":"ContainerDied","Data":"1e962c1308e9bad4e788cdd3a31bdd9c334a22eb1977cca1c628746cea0ef1a7"} Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.973853 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e962c1308e9bad4e788cdd3a31bdd9c334a22eb1977cca1c628746cea0ef1a7" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.973000 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk" Jan 26 00:21:40 crc kubenswrapper[4697]: I0126 00:21:40.978089 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stb4b" event={"ID":"5d6c6048-7761-4e30-8b56-82abcf5c8937","Type":"ContainerStarted","Data":"89b24c2d69582e0943cfbb9810d224ad159e0a009d9dc63c2ed9521733198c2f"} Jan 26 00:21:41 crc kubenswrapper[4697]: I0126 00:21:41.006833 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-stb4b" podStartSLOduration=7.380747257 podStartE2EDuration="10.006815265s" podCreationTimestamp="2026-01-26 00:21:31 +0000 UTC" firstStartedPulling="2026-01-26 00:21:37.840885135 +0000 UTC m=+839.477662525" lastFinishedPulling="2026-01-26 00:21:40.466953143 +0000 UTC m=+842.103730533" observedRunningTime="2026-01-26 00:21:41.005003452 +0000 UTC m=+842.641780842" watchObservedRunningTime="2026-01-26 00:21:41.006815265 +0000 UTC m=+842.643592665" Jan 26 00:21:42 crc kubenswrapper[4697]: I0126 00:21:42.200518 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stb4b" Jan 26 00:21:42 crc kubenswrapper[4697]: I0126 00:21:42.201012 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-stb4b" Jan 26 00:21:43 crc kubenswrapper[4697]: I0126 00:21:43.276420 4697 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-stb4b" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="registry-server" probeResult="failure" output=< Jan 26 00:21:43 crc kubenswrapper[4697]: timeout: failed to connect service ":50051" within 1s Jan 26 00:21:43 crc kubenswrapper[4697]: > Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.193115 4697 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/interconnect-operator-5bb49f789d-9kxsz"] Jan 26 00:21:48 crc kubenswrapper[4697]: E0126 00:21:48.193834 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerName="util" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.193848 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerName="util" Jan 26 00:21:48 crc kubenswrapper[4697]: E0126 00:21:48.193859 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerName="util" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.193864 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerName="util" Jan 26 00:21:48 crc kubenswrapper[4697]: E0126 00:21:48.193876 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerName="extract" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.193883 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerName="extract" Jan 26 00:21:48 crc kubenswrapper[4697]: E0126 00:21:48.193892 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerName="pull" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.193897 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerName="pull" Jan 26 00:21:48 crc kubenswrapper[4697]: E0126 00:21:48.193905 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerName="pull" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.193911 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerName="pull" Jan 26 00:21:48 crc kubenswrapper[4697]: E0126 
00:21:48.193920 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerName="extract" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.193925 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerName="extract" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.194031 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29c2002-ed3e-4018-84b9-c9760d243cb7" containerName="extract" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.194047 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2b0caf-95fc-4900-b54d-365c27b99671" containerName="extract" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.194490 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.197479 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-9dpk2" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.197784 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.198023 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.204031 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-9kxsz"] Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.381755 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8sbw\" (UniqueName: \"kubernetes.io/projected/93eead71-820d-4dff-99ca-c6e770c80fb8-kube-api-access-d8sbw\") pod \"interconnect-operator-5bb49f789d-9kxsz\" (UID: 
\"93eead71-820d-4dff-99ca-c6e770c80fb8\") " pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.487700 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8sbw\" (UniqueName: \"kubernetes.io/projected/93eead71-820d-4dff-99ca-c6e770c80fb8-kube-api-access-d8sbw\") pod \"interconnect-operator-5bb49f789d-9kxsz\" (UID: \"93eead71-820d-4dff-99ca-c6e770c80fb8\") " pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.536046 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8sbw\" (UniqueName: \"kubernetes.io/projected/93eead71-820d-4dff-99ca-c6e770c80fb8-kube-api-access-d8sbw\") pod \"interconnect-operator-5bb49f789d-9kxsz\" (UID: \"93eead71-820d-4dff-99ca-c6e770c80fb8\") " pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" Jan 26 00:21:48 crc kubenswrapper[4697]: I0126 00:21:48.901859 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" Jan 26 00:21:51 crc kubenswrapper[4697]: I0126 00:21:51.992018 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-8b67ff9b6-6nbzq"] Jan 26 00:21:51 crc kubenswrapper[4697]: I0126 00:21:51.992963 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:51 crc kubenswrapper[4697]: I0126 00:21:51.996745 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-qtdsw" Jan 26 00:21:51 crc kubenswrapper[4697]: I0126 00:21:51.996756 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.007835 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-8b67ff9b6-6nbzq"] Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.150574 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66492d55-a876-4ddc-849b-1e84aa96feae-webhook-cert\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.150667 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66492d55-a876-4ddc-849b-1e84aa96feae-apiservice-cert\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.150702 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4rz\" (UniqueName: \"kubernetes.io/projected/66492d55-a876-4ddc-849b-1e84aa96feae-kube-api-access-rq4rz\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.246661 4697 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stb4b" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.251693 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66492d55-a876-4ddc-849b-1e84aa96feae-apiservice-cert\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.251752 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq4rz\" (UniqueName: \"kubernetes.io/projected/66492d55-a876-4ddc-849b-1e84aa96feae-kube-api-access-rq4rz\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.251792 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66492d55-a876-4ddc-849b-1e84aa96feae-webhook-cert\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.257675 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66492d55-a876-4ddc-849b-1e84aa96feae-apiservice-cert\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.276732 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq4rz\" (UniqueName: \"kubernetes.io/projected/66492d55-a876-4ddc-849b-1e84aa96feae-kube-api-access-rq4rz\") pod 
\"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.280847 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66492d55-a876-4ddc-849b-1e84aa96feae-webhook-cert\") pod \"elastic-operator-8b67ff9b6-6nbzq\" (UID: \"66492d55-a876-4ddc-849b-1e84aa96feae\") " pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.294308 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stb4b" Jan 26 00:21:52 crc kubenswrapper[4697]: I0126 00:21:52.310355 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" Jan 26 00:21:53 crc kubenswrapper[4697]: E0126 00:21:53.103696 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" Jan 26 00:21:53 crc kubenswrapper[4697]: E0126 00:21:53.103890 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhz2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_openshift-marketplace(f4412bc5-df84-4f33-9640-a98bc9e0f9cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 00:21:53 crc kubenswrapper[4697]: E0126 00:21:53.105045 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" Jan 26 00:21:53 crc kubenswrapper[4697]: E0126 00:21:53.357371 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407\\\"\"" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" Jan 26 00:21:53 crc kubenswrapper[4697]: E0126 00:21:53.848818 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 26 00:21:53 crc kubenswrapper[4697]: E0126 00:21:53.849016 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z_openshift-operators(1c53bce5-4c78-4410-94da-1feadaf217a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:21:53 crc kubenswrapper[4697]: E0126 00:21:53.850272 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" podUID="1c53bce5-4c78-4410-94da-1feadaf217a6" Jan 26 00:21:54 crc kubenswrapper[4697]: E0126 00:21:54.361654 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" podUID="1c53bce5-4c78-4410-94da-1feadaf217a6" Jan 26 00:21:54 crc kubenswrapper[4697]: E0126 00:21:54.429721 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 26 00:21:54 crc kubenswrapper[4697]: E0126 00:21:54.429927 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf_openshift-operators(44233f32-bd83-47a6-bcee-47c8b02e5e0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:21:54 crc kubenswrapper[4697]: E0126 00:21:54.431292 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" podUID="44233f32-bd83-47a6-bcee-47c8b02e5e0b" Jan 26 00:21:55 crc kubenswrapper[4697]: E0126 00:21:55.367379 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" podUID="44233f32-bd83-47a6-bcee-47c8b02e5e0b" Jan 26 00:21:55 crc kubenswrapper[4697]: E0126 00:21:55.429031 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 26 00:21:55 crc kubenswrapper[4697]: E0126 00:21:55.429464 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true 
--disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qsjq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-cfj6x_openshift-operators(a0c71d7a-0767-481f-9f8d-e888252ed0f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:21:55 crc kubenswrapper[4697]: E0126 00:21:55.430902 4697 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" podUID="a0c71d7a-0767-481f-9f8d-e888252ed0f3" Jan 26 00:21:55 crc kubenswrapper[4697]: I0126 00:21:55.480596 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stb4b"] Jan 26 00:21:55 crc kubenswrapper[4697]: I0126 00:21:55.481247 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stb4b" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="registry-server" containerID="cri-o://89b24c2d69582e0943cfbb9810d224ad159e0a009d9dc63c2ed9521733198c2f" gracePeriod=2 Jan 26 00:21:56 crc kubenswrapper[4697]: I0126 00:21:56.372493 4697 generic.go:334] "Generic (PLEG): container finished" podID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerID="89b24c2d69582e0943cfbb9810d224ad159e0a009d9dc63c2ed9521733198c2f" exitCode=0 Jan 26 00:21:56 crc kubenswrapper[4697]: I0126 00:21:56.372598 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stb4b" event={"ID":"5d6c6048-7761-4e30-8b56-82abcf5c8937","Type":"ContainerDied","Data":"89b24c2d69582e0943cfbb9810d224ad159e0a009d9dc63c2ed9521733198c2f"} Jan 26 00:21:56 crc kubenswrapper[4697]: E0126 00:21:56.373755 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" podUID="a0c71d7a-0767-481f-9f8d-e888252ed0f3" Jan 26 00:21:59 
crc kubenswrapper[4697]: E0126 00:21:59.851242 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" Jan 26 00:21:59 crc kubenswrapper[4697]: E0126 00:21:59.851642 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) 
--openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:1b555e21bba7c609111ace4380382a696d9aceeb6e9816bf9023b8f689b6c741,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:e797cdb47beef40b04da7b6d645bca3dc32e6247003c45b56b38efd9e13bf01c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:7d662a120305e2528acc7e9142b770b5b6a7f4932ddfcadfa4ac953935124895,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:75465aabb0aa427a5c531a8fcde463f6d119afbcc618ebcbf6b7ee9bc8aad160,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:dc18c8d6a4a9a0a574a57cc5082c8a9b26023bd6d69b9732892d58
4c1dfe5070,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:369729978cecdc13c99ef3d179f8eb8a450a4a0cb70b63c27a55a15d1710ba27,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:d8c7a61d147f62b204d5c5f16864386025393453c9a81ea327bbd25d7765d611,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:b4a6eb1cc118a4334b424614959d8b7f361ddd779b3a72690ca49b0a3f26d9b8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:21d4fff670893ba4b7fbc528cd49f8b71c8281cede9ef84f0697065bb6a7fc50,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:12d9dbe297a1c3b9df671f21156992082bc483887d851fafe76e5d17321ff474,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:e65c37f04f6d76a0cbfe05edb3cddf6a8f14f859ee35cf3aebea8fcb991d2c19,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:48e4e178c6eeaa9d5dd77a591c185a311b4b4a5caadb7199d48463123e31dc9e,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztzxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-59bdc8b94-wvrz9_openshift-operators(d715f6c3-3dad-4e23-99a7-fed27f169907): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:21:59 crc kubenswrapper[4697]: E0126 00:21:59.853722 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" podUID="d715f6c3-3dad-4e23-99a7-fed27f169907" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.301250 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stb4b" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.406352 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-catalog-content\") pod \"5d6c6048-7761-4e30-8b56-82abcf5c8937\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.406395 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksw8z\" (UniqueName: \"kubernetes.io/projected/5d6c6048-7761-4e30-8b56-82abcf5c8937-kube-api-access-ksw8z\") pod \"5d6c6048-7761-4e30-8b56-82abcf5c8937\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.406420 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-utilities\") pod \"5d6c6048-7761-4e30-8b56-82abcf5c8937\" (UID: \"5d6c6048-7761-4e30-8b56-82abcf5c8937\") " Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.407498 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-utilities" (OuterVolumeSpecName: "utilities") pod "5d6c6048-7761-4e30-8b56-82abcf5c8937" (UID: "5d6c6048-7761-4e30-8b56-82abcf5c8937"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.412043 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d6c6048-7761-4e30-8b56-82abcf5c8937-kube-api-access-ksw8z" (OuterVolumeSpecName: "kube-api-access-ksw8z") pod "5d6c6048-7761-4e30-8b56-82abcf5c8937" (UID: "5d6c6048-7761-4e30-8b56-82abcf5c8937"). InnerVolumeSpecName "kube-api-access-ksw8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.454050 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d6c6048-7761-4e30-8b56-82abcf5c8937" (UID: "5d6c6048-7761-4e30-8b56-82abcf5c8937"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.493806 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-9kxsz"] Jan 26 00:22:00 crc kubenswrapper[4697]: W0126 00:22:00.496776 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93eead71_820d_4dff_99ca_c6e770c80fb8.slice/crio-f99d858e22bd7f870a919b503de4c32bd2dadfabbc16fbb98abb3ff5155af714 WatchSource:0}: Error finding container f99d858e22bd7f870a919b503de4c32bd2dadfabbc16fbb98abb3ff5155af714: Status 404 returned error can't find the container with id f99d858e22bd7f870a919b503de4c32bd2dadfabbc16fbb98abb3ff5155af714 Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.507460 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:00 crc kubenswrapper[4697]: 
I0126 00:22:00.507495 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksw8z\" (UniqueName: \"kubernetes.io/projected/5d6c6048-7761-4e30-8b56-82abcf5c8937-kube-api-access-ksw8z\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.507507 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d6c6048-7761-4e30-8b56-82abcf5c8937-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.511096 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" event={"ID":"ac806297-3fe1-4e19-8a22-d98dd2bfbbfd","Type":"ContainerStarted","Data":"92d11bd5b52d2ae575d78b5f11d0cea3075ae0693e9d20b9d5e86023bde03133"} Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.511218 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.512238 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" event={"ID":"93eead71-820d-4dff-99ca-c6e770c80fb8","Type":"ContainerStarted","Data":"f99d858e22bd7f870a919b503de4c32bd2dadfabbc16fbb98abb3ff5155af714"} Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.515007 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stb4b" event={"ID":"5d6c6048-7761-4e30-8b56-82abcf5c8937","Type":"ContainerDied","Data":"fa7c4b0069d4c5546743c8ba87aabf9af1337a22052de7e7c0b4bfd0d9689f3b"} Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.515057 4697 scope.go:117] "RemoveContainer" containerID="89b24c2d69582e0943cfbb9810d224ad159e0a009d9dc63c2ed9521733198c2f" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.515134 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-stb4b" Jan 26 00:22:00 crc kubenswrapper[4697]: E0126 00:22:00.516499 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c\\\"\"" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" podUID="d715f6c3-3dad-4e23-99a7-fed27f169907" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.543698 4697 scope.go:117] "RemoveContainer" containerID="a769fb35c6d7aa214bf182de04eb58a2842c92ae16525736ebf8cc5ed7a29118" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.548699 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" podStartSLOduration=1.788878644 podStartE2EDuration="22.548673046s" podCreationTimestamp="2026-01-26 00:21:38 +0000 UTC" firstStartedPulling="2026-01-26 00:21:39.04312801 +0000 UTC m=+840.679905400" lastFinishedPulling="2026-01-26 00:21:59.802922412 +0000 UTC m=+861.439699802" observedRunningTime="2026-01-26 00:22:00.536277334 +0000 UTC m=+862.173054734" watchObservedRunningTime="2026-01-26 00:22:00.548673046 +0000 UTC m=+862.185450436" Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.550864 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-8b67ff9b6-6nbzq"] Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.568598 4697 scope.go:117] "RemoveContainer" containerID="114990070a170da7221bd7224b4fe11f4ac696ade553d5076e4cbb465b8c8833" Jan 26 00:22:00 crc kubenswrapper[4697]: W0126 00:22:00.577628 4697 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66492d55_a876_4ddc_849b_1e84aa96feae.slice/crio-137c8ddf93bc627cfac4f203e951b5b644ad1fae371ddac8a7ec6b0b85cca44b WatchSource:0}: Error finding container 137c8ddf93bc627cfac4f203e951b5b644ad1fae371ddac8a7ec6b0b85cca44b: Status 404 returned error can't find the container with id 137c8ddf93bc627cfac4f203e951b5b644ad1fae371ddac8a7ec6b0b85cca44b Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.590028 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stb4b"] Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.595831 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stb4b"] Jan 26 00:22:00 crc kubenswrapper[4697]: I0126 00:22:00.667744 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" path="/var/lib/kubelet/pods/5d6c6048-7761-4e30-8b56-82abcf5c8937/volumes" Jan 26 00:22:01 crc kubenswrapper[4697]: I0126 00:22:01.527292 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" event={"ID":"66492d55-a876-4ddc-849b-1e84aa96feae","Type":"ContainerStarted","Data":"137c8ddf93bc627cfac4f203e951b5b644ad1fae371ddac8a7ec6b0b85cca44b"} Jan 26 00:22:04 crc kubenswrapper[4697]: I0126 00:22:04.572585 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" event={"ID":"66492d55-a876-4ddc-849b-1e84aa96feae","Type":"ContainerStarted","Data":"beac4383cc709b43d0704ee5a21da22fa586fbaa34b18e759dfe131268848356"} Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.366716 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-8b67ff9b6-6nbzq" podStartSLOduration=11.296947098 podStartE2EDuration="14.366697869s" podCreationTimestamp="2026-01-26 00:21:51 +0000 UTC" 
firstStartedPulling="2026-01-26 00:22:00.592608616 +0000 UTC m=+862.229386006" lastFinishedPulling="2026-01-26 00:22:03.662359387 +0000 UTC m=+865.299136777" observedRunningTime="2026-01-26 00:22:04.598795194 +0000 UTC m=+866.235572584" watchObservedRunningTime="2026-01-26 00:22:05.366697869 +0000 UTC m=+867.003475259" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.368969 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:05 crc kubenswrapper[4697]: E0126 00:22:05.369232 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="extract-content" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.369252 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="extract-content" Jan 26 00:22:05 crc kubenswrapper[4697]: E0126 00:22:05.369268 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="extract-utilities" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.369278 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="extract-utilities" Jan 26 00:22:05 crc kubenswrapper[4697]: E0126 00:22:05.369293 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="registry-server" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.369300 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="registry-server" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.369419 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d6c6048-7761-4e30-8b56-82abcf5c8937" containerName="registry-server" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.373668 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.376141 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.377255 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.377262 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.377569 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.377894 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.378351 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.378529 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.380234 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.380321 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-bmn4d" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.384822 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388195 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388253 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388297 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388316 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388333 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" 
(UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388354 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388371 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388392 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388409 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388430 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ed09d994-82cd-421d-b986-a5a68398fc8b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388446 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388461 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388480 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388496 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: 
\"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.388516 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.489859 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.489940 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.489968 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.489999 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490026 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490050 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490091 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490113 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490136 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ed09d994-82cd-421d-b986-a5a68398fc8b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490161 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490182 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490206 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490230 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: 
\"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490249 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490295 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.490696 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.491849 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.492091 4697 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.492530 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.493248 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.494423 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.494679 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc 
kubenswrapper[4697]: I0126 00:22:05.494862 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.496242 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.497092 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ed09d994-82cd-421d-b986-a5a68398fc8b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.497735 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.498510 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.498688 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.507468 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.523824 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ed09d994-82cd-421d-b986-a5a68398fc8b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ed09d994-82cd-421d-b986-a5a68398fc8b\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.694683 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:22:05 crc kubenswrapper[4697]: I0126 00:22:05.990510 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:05 crc kubenswrapper[4697]: W0126 00:22:05.998729 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded09d994_82cd_421d_b986_a5a68398fc8b.slice/crio-ff72c5928f86c0f955d75e5acdf8101532afb6257008538359c0e8aa60ec14f5 WatchSource:0}: Error finding container ff72c5928f86c0f955d75e5acdf8101532afb6257008538359c0e8aa60ec14f5: Status 404 returned error can't find the container with id ff72c5928f86c0f955d75e5acdf8101532afb6257008538359c0e8aa60ec14f5 Jan 26 00:22:06 crc kubenswrapper[4697]: I0126 00:22:06.585833 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ed09d994-82cd-421d-b986-a5a68398fc8b","Type":"ContainerStarted","Data":"ff72c5928f86c0f955d75e5acdf8101532afb6257008538359c0e8aa60ec14f5"} Jan 26 00:22:08 crc kubenswrapper[4697]: I0126 00:22:08.563013 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-xf6fx" Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.729096 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" event={"ID":"93eead71-820d-4dff-99ca-c6e770c80fb8","Type":"ContainerStarted","Data":"1a9923e1cf92c8c5b22d10ee4490864d142e7830ab1603d9f9a75e7c764367e1"} Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.732589 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" event={"ID":"1c53bce5-4c78-4410-94da-1feadaf217a6","Type":"ContainerStarted","Data":"dfc2d7fad754e2010b92c89563d7fed62d290d8ac8c8acaa370174b67be5c4b3"} Jan 26 
00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.734678 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" event={"ID":"44233f32-bd83-47a6-bcee-47c8b02e5e0b","Type":"ContainerStarted","Data":"f6fec26e1283de3b40cdfe005f2958cd48a7a60d9dba3989af2d71a217f860e9"} Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.736651 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" event={"ID":"a0c71d7a-0767-481f-9f8d-e888252ed0f3","Type":"ContainerStarted","Data":"438386a31aafbdbd972ee9829f8d070b6796d96d64ef3d8a36397eccae0ff63c"} Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.738684 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerID="6c409d7792bcfe0f81a93c96c2372d53d41af52c45fa98eb98464a7724e29cc9" exitCode=0 Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.738708 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" event={"ID":"f4412bc5-df84-4f33-9640-a98bc9e0f9cc","Type":"ContainerDied","Data":"6c409d7792bcfe0f81a93c96c2372d53d41af52c45fa98eb98464a7724e29cc9"} Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.750195 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-9kxsz" podStartSLOduration=13.815879886 podStartE2EDuration="27.750178376s" podCreationTimestamp="2026-01-26 00:21:48 +0000 UTC" firstStartedPulling="2026-01-26 00:22:00.498654778 +0000 UTC m=+862.135432168" lastFinishedPulling="2026-01-26 00:22:14.432953268 +0000 UTC m=+876.069730658" observedRunningTime="2026-01-26 00:22:15.749349544 +0000 UTC m=+877.386126934" watchObservedRunningTime="2026-01-26 00:22:15.750178376 +0000 UTC m=+877.386955766" Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 
00:22:15.771122 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf" podStartSLOduration=2.828332003 podStartE2EDuration="38.771106903s" podCreationTimestamp="2026-01-26 00:21:37 +0000 UTC" firstStartedPulling="2026-01-26 00:21:38.485676724 +0000 UTC m=+840.122454124" lastFinishedPulling="2026-01-26 00:22:14.428451634 +0000 UTC m=+876.065229024" observedRunningTime="2026-01-26 00:22:15.768203763 +0000 UTC m=+877.404981163" watchObservedRunningTime="2026-01-26 00:22:15.771106903 +0000 UTC m=+877.407884293" Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.806454 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-cfj6x" podStartSLOduration=2.663906227 podStartE2EDuration="38.806437046s" podCreationTimestamp="2026-01-26 00:21:37 +0000 UTC" firstStartedPulling="2026-01-26 00:21:38.404867832 +0000 UTC m=+840.041645222" lastFinishedPulling="2026-01-26 00:22:14.547398651 +0000 UTC m=+876.184176041" observedRunningTime="2026-01-26 00:22:15.802533329 +0000 UTC m=+877.439310719" watchObservedRunningTime="2026-01-26 00:22:15.806437046 +0000 UTC m=+877.443214436" Jan 26 00:22:15 crc kubenswrapper[4697]: I0126 00:22:15.855618 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z" podStartSLOduration=3.033754148 podStartE2EDuration="38.85559617s" podCreationTimestamp="2026-01-26 00:21:37 +0000 UTC" firstStartedPulling="2026-01-26 00:21:38.614217152 +0000 UTC m=+840.250994542" lastFinishedPulling="2026-01-26 00:22:14.436059174 +0000 UTC m=+876.072836564" observedRunningTime="2026-01-26 00:22:15.854304375 +0000 UTC m=+877.491081765" watchObservedRunningTime="2026-01-26 00:22:15.85559617 +0000 UTC m=+877.492373560" Jan 26 00:22:16 crc kubenswrapper[4697]: I0126 00:22:16.746444 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" event={"ID":"f4412bc5-df84-4f33-9640-a98bc9e0f9cc","Type":"ContainerStarted","Data":"e3cafcd4cbeea2b669c58f4c6db6194e9921052bb836b97658f756b007cb50eb"} Jan 26 00:22:16 crc kubenswrapper[4697]: I0126 00:22:16.748814 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" event={"ID":"d715f6c3-3dad-4e23-99a7-fed27f169907","Type":"ContainerStarted","Data":"0e60d05f520e9f91ef8ed63c91128656d699d30a032ea88a60f591fc128b074a"} Jan 26 00:22:16 crc kubenswrapper[4697]: I0126 00:22:16.749584 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:22:16 crc kubenswrapper[4697]: I0126 00:22:16.773892 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" podStartSLOduration=8.197087157 podStartE2EDuration="44.773877839s" podCreationTimestamp="2026-01-26 00:21:32 +0000 UTC" firstStartedPulling="2026-01-26 00:21:37.859159039 +0000 UTC m=+839.495936429" lastFinishedPulling="2026-01-26 00:22:14.435949721 +0000 UTC m=+876.072727111" observedRunningTime="2026-01-26 00:22:16.771027711 +0000 UTC m=+878.407805111" watchObservedRunningTime="2026-01-26 00:22:16.773877839 +0000 UTC m=+878.410655229" Jan 26 00:22:16 crc kubenswrapper[4697]: I0126 00:22:16.775160 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" Jan 26 00:22:16 crc kubenswrapper[4697]: I0126 00:22:16.799761 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-wvrz9" podStartSLOduration=2.735316118 podStartE2EDuration="39.799747492s" podCreationTimestamp="2026-01-26 
00:21:37 +0000 UTC" firstStartedPulling="2026-01-26 00:21:38.55463252 +0000 UTC m=+840.191409910" lastFinishedPulling="2026-01-26 00:22:15.619063894 +0000 UTC m=+877.255841284" observedRunningTime="2026-01-26 00:22:16.798508558 +0000 UTC m=+878.435285948" watchObservedRunningTime="2026-01-26 00:22:16.799747492 +0000 UTC m=+878.436524882" Jan 26 00:22:17 crc kubenswrapper[4697]: I0126 00:22:17.755769 4697 generic.go:334] "Generic (PLEG): container finished" podID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerID="e3cafcd4cbeea2b669c58f4c6db6194e9921052bb836b97658f756b007cb50eb" exitCode=0 Jan 26 00:22:17 crc kubenswrapper[4697]: I0126 00:22:17.755961 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" event={"ID":"f4412bc5-df84-4f33-9640-a98bc9e0f9cc","Type":"ContainerDied","Data":"e3cafcd4cbeea2b669c58f4c6db6194e9921052bb836b97658f756b007cb50eb"} Jan 26 00:22:30 crc kubenswrapper[4697]: I0126 00:22:30.811517 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.064238 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-bundle\") pod \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.064353 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-util\") pod \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.064410 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhz2q\" (UniqueName: \"kubernetes.io/projected/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-kube-api-access-bhz2q\") pod \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\" (UID: \"f4412bc5-df84-4f33-9640-a98bc9e0f9cc\") " Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.065347 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-bundle" (OuterVolumeSpecName: "bundle") pod "f4412bc5-df84-4f33-9640-a98bc9e0f9cc" (UID: "f4412bc5-df84-4f33-9640-a98bc9e0f9cc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.074160 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-util" (OuterVolumeSpecName: "util") pod "f4412bc5-df84-4f33-9640-a98bc9e0f9cc" (UID: "f4412bc5-df84-4f33-9640-a98bc9e0f9cc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.103891 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" event={"ID":"f4412bc5-df84-4f33-9640-a98bc9e0f9cc","Type":"ContainerDied","Data":"70ea2ea64f894ea4e181f025642a9f08b5dde82478fbbfcb2844d01b96e37745"} Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.103939 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70ea2ea64f894ea4e181f025642a9f08b5dde82478fbbfcb2844d01b96e37745" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.103980 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.112023 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-kube-api-access-bhz2q" (OuterVolumeSpecName: "kube-api-access-bhz2q") pod "f4412bc5-df84-4f33-9640-a98bc9e0f9cc" (UID: "f4412bc5-df84-4f33-9640-a98bc9e0f9cc"). InnerVolumeSpecName "kube-api-access-bhz2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.176267 4697 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.176308 4697 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-util\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:31 crc kubenswrapper[4697]: I0126 00:22:31.176317 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhz2q\" (UniqueName: \"kubernetes.io/projected/f4412bc5-df84-4f33-9640-a98bc9e0f9cc-kube-api-access-bhz2q\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.016827 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:34 crc kubenswrapper[4697]: E0126 00:22:34.018037 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerName="extract" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.018132 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerName="extract" Jan 26 00:22:34 crc kubenswrapper[4697]: E0126 00:22:34.018199 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerName="util" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.018272 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerName="util" Jan 26 00:22:34 crc kubenswrapper[4697]: E0126 00:22:34.018334 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerName="pull" Jan 26 00:22:34 crc 
kubenswrapper[4697]: I0126 00:22:34.018392 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerName="pull" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.018561 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4412bc5-df84-4f33-9640-a98bc9e0f9cc" containerName="extract" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.019305 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.022956 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.023038 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-pfrjl" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.022974 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.023426 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.033850 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125309 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dgs\" (UniqueName: \"kubernetes.io/projected/e7327eaa-54f4-4f29-9041-f223ae46a6a4-kube-api-access-z8dgs\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125350 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125373 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125393 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125411 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125430 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-push\") pod 
\"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125452 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125472 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125492 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125508 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125525 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.125540 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.226663 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8dgs\" (UniqueName: \"kubernetes.io/projected/e7327eaa-54f4-4f29-9041-f223ae46a6a4-kube-api-access-z8dgs\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.226719 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.226746 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.226767 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.226827 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.226984 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.227561 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.227600 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-node-pullsecrets\") pod 
\"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.227635 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228112 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.227674 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228209 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228237 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228264 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228325 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228539 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.228677 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.229292 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.229533 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.229606 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.233837 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.243408 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8dgs\" (UniqueName: \"kubernetes.io/projected/e7327eaa-54f4-4f29-9041-f223ae46a6a4-kube-api-access-z8dgs\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.263524 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:34 crc kubenswrapper[4697]: I0126 00:22:34.340957 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.386473 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q"] Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.387635 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.391156 4697 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-gsqlj" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.393043 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.398255 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.410698 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q"] Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.553441 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32ac253d-f078-4e81-9f97-2c184570631d-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-gq65q\" (UID: \"32ac253d-f078-4e81-9f97-2c184570631d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.553488 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gh5z\" (UniqueName: \"kubernetes.io/projected/32ac253d-f078-4e81-9f97-2c184570631d-kube-api-access-5gh5z\") pod \"cert-manager-operator-controller-manager-5446d6888b-gq65q\" (UID: \"32ac253d-f078-4e81-9f97-2c184570631d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.655013 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/32ac253d-f078-4e81-9f97-2c184570631d-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-gq65q\" (UID: \"32ac253d-f078-4e81-9f97-2c184570631d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.655086 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gh5z\" (UniqueName: \"kubernetes.io/projected/32ac253d-f078-4e81-9f97-2c184570631d-kube-api-access-5gh5z\") pod \"cert-manager-operator-controller-manager-5446d6888b-gq65q\" (UID: \"32ac253d-f078-4e81-9f97-2c184570631d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.655834 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32ac253d-f078-4e81-9f97-2c184570631d-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-gq65q\" (UID: \"32ac253d-f078-4e81-9f97-2c184570631d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.674554 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gh5z\" (UniqueName: \"kubernetes.io/projected/32ac253d-f078-4e81-9f97-2c184570631d-kube-api-access-5gh5z\") pod \"cert-manager-operator-controller-manager-5446d6888b-gq65q\" (UID: \"32ac253d-f078-4e81-9f97-2c184570631d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:37 crc kubenswrapper[4697]: I0126 00:22:37.747292 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" Jan 26 00:22:41 crc kubenswrapper[4697]: E0126 00:22:41.369218 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Jan 26 00:22:41 crc kubenswrapper[4697]: E0126 00:22:41.369909 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c /mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(ed09d994-82cd-421d-b986-a5a68398fc8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 00:22:41 crc kubenswrapper[4697]: E0126 00:22:41.371446 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" Jan 26 00:22:41 crc kubenswrapper[4697]: I0126 00:22:41.608006 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:41 crc kubenswrapper[4697]: W0126 00:22:41.620186 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7327eaa_54f4_4f29_9041_f223ae46a6a4.slice/crio-ea9b57bd2c189204421bf5b4e17737278b8cad87ac51116305dc3b706981678e WatchSource:0}: Error finding container ea9b57bd2c189204421bf5b4e17737278b8cad87ac51116305dc3b706981678e: Status 404 returned error can't find the container with id ea9b57bd2c189204421bf5b4e17737278b8cad87ac51116305dc3b706981678e Jan 26 00:22:41 crc kubenswrapper[4697]: I0126 00:22:41.781410 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q"] Jan 26 00:22:41 crc kubenswrapper[4697]: W0126 00:22:41.793528 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32ac253d_f078_4e81_9f97_2c184570631d.slice/crio-a576aebdcb802bbba67307c1e831bc3468c607554a2a7063f8bb464192e5df53 WatchSource:0}: Error finding container a576aebdcb802bbba67307c1e831bc3468c607554a2a7063f8bb464192e5df53: Status 404 returned error can't find the container with id a576aebdcb802bbba67307c1e831bc3468c607554a2a7063f8bb464192e5df53 Jan 26 00:22:42 crc kubenswrapper[4697]: I0126 00:22:42.278313 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" event={"ID":"32ac253d-f078-4e81-9f97-2c184570631d","Type":"ContainerStarted","Data":"a576aebdcb802bbba67307c1e831bc3468c607554a2a7063f8bb464192e5df53"} Jan 26 00:22:42 crc kubenswrapper[4697]: I0126 00:22:42.279449 4697 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"e7327eaa-54f4-4f29-9041-f223ae46a6a4","Type":"ContainerStarted","Data":"ea9b57bd2c189204421bf5b4e17737278b8cad87ac51116305dc3b706981678e"} Jan 26 00:22:42 crc kubenswrapper[4697]: E0126 00:22:42.280811 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" Jan 26 00:22:42 crc kubenswrapper[4697]: I0126 00:22:42.431537 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:42 crc kubenswrapper[4697]: I0126 00:22:42.462957 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 26 00:22:43 crc kubenswrapper[4697]: E0126 00:22:43.287291 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" Jan 26 00:22:44 crc kubenswrapper[4697]: E0126 00:22:44.294562 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" Jan 26 00:22:44 crc kubenswrapper[4697]: I0126 00:22:44.555213 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.504615 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.505895 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.508690 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.509012 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.509248 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.518985 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526192 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526247 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526285 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwqt5\" (UniqueName: \"kubernetes.io/projected/06a59088-4a71-4b38-ae32-ef82bec71c07-kube-api-access-mwqt5\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526317 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526340 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526361 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526391 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" 
(UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526417 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526452 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526475 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526494 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.526510 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627766 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627820 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627844 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627863 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627882 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627918 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627944 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627965 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwqt5\" (UniqueName: \"kubernetes.io/projected/06a59088-4a71-4b38-ae32-ef82bec71c07-kube-api-access-mwqt5\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.627984 4697 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.628001 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.628016 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.628037 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.628109 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 
00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.628485 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.629538 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.629808 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.629931 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.630250 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.630705 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.630951 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.631251 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.633991 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.646650 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-pull\") pod 
\"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.647620 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwqt5\" (UniqueName: \"kubernetes.io/projected/06a59088-4a71-4b38-ae32-ef82bec71c07-kube-api-access-mwqt5\") pod \"service-telemetry-operator-2-build\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:46 crc kubenswrapper[4697]: I0126 00:22:46.876266 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:22:57 crc kubenswrapper[4697]: E0126 00:22:57.285724 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe" Jan 26 00:22:57 crc kubenswrapper[4697]: E0126 00:22:57.286621 4697 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 26 00:22:57 crc kubenswrapper[4697]: init container &Container{Name:manage-dockerfile,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe,Command:[],Args:[openshift-manage-dockerfile 
--v=0],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:BUILD,Value:{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"service-telemetry-operator-1","namespace":"service-telemetry","uid":"2126cbeb-da2a-49e2-bc6a-b91d07163e1e","resourceVersion":"33640","generation":1,"creationTimestamp":"2026-01-26T00:22:33Z","labels":{"build":"service-telemetry-operator","buildconfig":"service-telemetry-operator","openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"service-telemetry-operator","uid":"c3c3d085-e9a0-4eef-b7a1-6851609717b3","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2026-01-26T00:22:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c3c3d085-e9a0-4eef-b7a1-6851609717b3\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM quay.io/operator-framework/ansible-operator:v1.38.1\n\n# temporarily switch to root user to adjust image layers\nUSER 0\n# Upstream CI builds need the additional EPEL sources for python3-passlib and python3-bcrypt but have no 
working repos to install epel-release\n# NO_PROXY is undefined in upstream CI builds, but defined (usually blank) during openshift builds (a possibly brittle hack)\nRUN bash -c -- 'if [ \"${NO_PROXY:-__ZZZZZ}\" == \"__ZZZZZ\" ]; then echo \"Applying upstream EPEL hacks\" \u0026\u0026 echo -e \"-----BEGIN PGP PUBLIC KEY BLOCK-----\\nmQINBGE3mOsBEACsU+XwJWDJVkItBaugXhXIIkb9oe+7aadELuVo0kBmc3HXt/Yp\\nCJW9hHEiGZ6z2jwgPqyJjZhCvcAWvgzKcvqE+9i0NItV1rzfxrBe2BtUtZmVcuE6\\n2b+SPfxQ2Hr8llaawRjt8BCFX/ZzM4/1Qk+EzlfTcEcpkMf6wdO7kD6ulBk/tbsW\\nDHX2lNcxszTf+XP9HXHWJlA2xBfP+Dk4gl4DnO2Y1xR0OSywE/QtvEbN5cY94ieu\\nn7CBy29AleMhmbnx9pw3NyxcFIAsEZHJoU4ZW9ulAJ/ogttSyAWeacW7eJGW31/Z\\n39cS+I4KXJgeGRI20RmpqfH0tuT+X5Da59YpjYxkbhSK3HYBVnNPhoJFUc2j5iKy\\nXLgkapu1xRnEJhw05kr4LCbud0NTvfecqSqa+59kuVc+zWmfTnGTYc0PXZ6Oa3rK\\n44UOmE6eAT5zd/ToleDO0VesN+EO7CXfRsm7HWGpABF5wNK3vIEF2uRr2VJMvgqS\\n9eNwhJyOzoca4xFSwCkc6dACGGkV+CqhufdFBhmcAsUotSxe3zmrBjqA0B/nxIvH\\nDVgOAMnVCe+Lmv8T0mFgqZSJdIUdKjnOLu/GRFhjDKIak4jeMBMTYpVnU+HhMHLq\\nuDiZkNEvEEGhBQmZuI8J55F/a6UURnxUwT3piyi3Pmr2IFD7ahBxPzOBCQARAQAB\\ntCdGZWRvcmEgKGVwZWw5KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAk4EEwEI\\nADgWIQT/itE0RZcQbs6BO5GKOHK/MihGfAUCYTeY6wIbDwULCQgHAgYVCgkICwIE\\nFgIDAQIeAQIXgAAKCRCKOHK/MihGfFX/EACBPWv20+ttYu1A5WvtHJPzwbj0U4yF\\n3zTQpBglQ2UfkRpYdipTlT3Ih6j5h2VmgRPtINCc/ZE28adrWpBoeFIS2YAKOCLC\\nnZYtHl2nCoLq1U7FSttUGsZ/t8uGCBgnugTfnIYcmlP1jKKA6RJAclK89evDQX5n\\nR9ZD+Cq3CBMlttvSTCht0qQVlwycedH8iWyYgP/mF0W35BIn7NuuZwWhgR00n/VG\\n4nbKPOzTWbsP45awcmivdrS74P6mL84WfkghipdmcoyVb1B8ZP4Y/Ke0RXOnLhNe\\nCfrXXvuW+Pvg2RTfwRDtehGQPAgXbmLmz2ZkV69RGIr54HJv84NDbqZovRTMr7gL\\n9k3ciCzXCiYQgM8yAyGHV0KEhFSQ1HV7gMnt9UmxbxBE2pGU7vu3CwjYga5DpwU7\\nw5wu1TmM5KgZtZvuWOTDnqDLf0cKoIbW8FeeCOn24elcj32bnQDuF9DPey1mqcvT\\n/yEo/Ushyz6CVYxN8DGgcy2M9JOsnmjDx02h6qgWGWDuKgb9jZrvRedpAQCeemEd\\nfhEs6ihqVxRFl16HxC4EVijybhAL76SsM2nbtIqW1apBQJQpXWtQwwdvgTVpdEtE\\nr4ArVJYX5LrswnWEQMOelugUG6S3ZjMfcyOa/O0364iY73vyVgaYK+2XtT2usMux\\nVL469Kj5m13T6w==\\n=Mjs/\\n-----END PGP PUBLIC KEY 
BLOCK-----\" \u003e /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9 \u0026\u0026 echo -e \"[epel]\\nname=Extra Packages for Enterprise Linux 9 - \\$basearch\\nmetalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9\u0026arch=\\$basearch\u0026infra=\\$infra\u0026content=\\$contentdir\\nenabled=1\\ngpgcheck=1\\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9\" \u003e /etc/yum.repos.d/epel.repo; fi'\n\n# update the base image to allow forward-looking optimistic updates during the testing phase, with the added benefit of helping move closer to passing security scans.\n# -- excludes ansible so it remains at 2.9 tag as shipped with the base image\n# -- installs python3-passlib and python3-bcrypt for oauth-proxy interface\n# -- cleans up the cached data from dnf to keep the image as small as possible\nRUN dnf update -y --exclude=ansible* \u0026\u0026 dnf install -y python3-passlib python3-bcrypt \u0026\u0026 dnf clean all \u0026\u0026 rm -rf /var/cache/dnf\n\nCOPY requirements.yml ${HOME}/requirements.yml\nRUN ansible-galaxy collection install -r ${HOME}/requirements.yml \\\n \u0026\u0026 chmod -R ug+rwx ${HOME}/.ansible\n\n# switch back to user 1001 when running the base image (non-root)\nUSER 1001\n\n# copy in required artifacts for the operator\nCOPY watches.yaml ${HOME}/watches.yaml\nCOPY roles/ ${HOME}/roles/\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e"},"pullSecret":{"name":"builder-dockercfg-pfrjl"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest"},"pushSecret":{"name":"builder-dockercfg-pfrjl"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e","fromRef":{"kind":"ImageStreamTag","name":"ansible-operator:v1.38.1"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest","config":{"kind":"BuildConfig","namespace":"service-telemetry","name":"service-telemetry-operator"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2026-01-26T00:22:33Z","lastTransitionTime":"2026-01-26T00:22:33Z"}]}} Jan 26 00:22:57 crc kubenswrapper[4697]: ,ValueFrom:nil,},EnvVar{Name:LANG,Value:C.utf8,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/registries.conf,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_DIR_PATH,Value:/var/run/configs/openshift.io/build-system/registries.d,ValueFrom:nil,},EnvVar{Name:BUILD_SIGNATURE_POLICY_PATH,Value:/var/run/configs/openshift.io/build-system/policy.json,ValueFrom:nil,},EnvVar{Name:BUILD_STORAGE_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/storage.conf,ValueFrom:nil,},EnvVar{Name:BUILD_BLOBCACHE_DIR,Value:/var/cache/blobs,ValueFrom:nil,},EnvVar{Name:HTTP_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:http_proxy,Value:,ValueFrom:nil,},EnvVar{Name:HTTPS_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:https_proxy,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:no_proxy,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:buildworkdir,ReadOnly:false,MountPath:/tmp/build,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-system-configs,ReadOnly:true,MountPath:/var/run/configs/openshift.io/build-system,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name
:build-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-proxy-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-blob-cache,ReadOnly:false,MountPath:/var/cache/blobs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8dgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[CHOWN DAC_OVERRIDE],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-1-build_service-telemetry(e7327eaa-54f4-4f29-9041-f223ae46a6a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 26 00:22:57 crc kubenswrapper[4697]: > logger="UnhandledError" Jan 26 00:22:57 crc kubenswrapper[4697]: E0126 00:22:57.287794 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manage-dockerfile\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-1-build" podUID="e7327eaa-54f4-4f29-9041-f223ae46a6a4" Jan 26 00:22:57 crc 
kubenswrapper[4697]: I0126 00:22:57.643758 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" event={"ID":"32ac253d-f078-4e81-9f97-2c184570631d","Type":"ContainerStarted","Data":"aa71a9b335e1a564536d67636824439e6a8605974489171f0c0a301c3975c4e1"} Jan 26 00:22:57 crc kubenswrapper[4697]: I0126 00:22:57.646123 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ed09d994-82cd-421d-b986-a5a68398fc8b","Type":"ContainerStarted","Data":"be5872c4ff2005b9e5bdcd7c3b62da5306e0984e3f0e6c71e2b588e311d1f224"} Jan 26 00:22:57 crc kubenswrapper[4697]: I0126 00:22:57.819056 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-gq65q" podStartSLOduration=5.237841377 podStartE2EDuration="20.819034903s" podCreationTimestamp="2026-01-26 00:22:37 +0000 UTC" firstStartedPulling="2026-01-26 00:22:41.796754433 +0000 UTC m=+903.433531823" lastFinishedPulling="2026-01-26 00:22:57.377947959 +0000 UTC m=+919.014725349" observedRunningTime="2026-01-26 00:22:57.670289726 +0000 UTC m=+919.307067136" watchObservedRunningTime="2026-01-26 00:22:57.819034903 +0000 UTC m=+919.455812303" Jan 26 00:22:57 crc kubenswrapper[4697]: I0126 00:22:57.900686 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.026560 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159447 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-pull\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159558 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildworkdir\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159625 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-ca-bundles\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159644 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-proxy-ca-bundles\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159668 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-node-pullsecrets\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159705 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-root\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159736 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-push\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159755 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8dgs\" (UniqueName: \"kubernetes.io/projected/e7327eaa-54f4-4f29-9041-f223ae46a6a4-kube-api-access-z8dgs\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159770 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-blob-cache\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159783 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159806 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-run\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159882 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-system-configs\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.159927 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildcachedir\") pod \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\" (UID: \"e7327eaa-54f4-4f29-9041-f223ae46a6a4\") " Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.160172 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.160361 4697 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.160375 4697 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7327eaa-54f4-4f29-9041-f223ae46a6a4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.160704 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.160919 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.161020 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.161051 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.161149 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.161310 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.161385 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.165024 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-pull" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-pull") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "builder-dockercfg-pfrjl-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.165145 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-push" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-push") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "builder-dockercfg-pfrjl-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.165268 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7327eaa-54f4-4f29-9041-f223ae46a6a4-kube-api-access-z8dgs" (OuterVolumeSpecName: "kube-api-access-z8dgs") pod "e7327eaa-54f4-4f29-9041-f223ae46a6a4" (UID: "e7327eaa-54f4-4f29-9041-f223ae46a6a4"). InnerVolumeSpecName "kube-api-access-z8dgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262102 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262146 4697 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262159 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262174 4697 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262187 4697 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262198 4697 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262209 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-container-storage-root\") on node \"crc\" 
DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262220 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/e7327eaa-54f4-4f29-9041-f223ae46a6a4-builder-dockercfg-pfrjl-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262231 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8dgs\" (UniqueName: \"kubernetes.io/projected/e7327eaa-54f4-4f29-9041-f223ae46a6a4-kube-api-access-z8dgs\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.262241 4697 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e7327eaa-54f4-4f29-9041-f223ae46a6a4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.652876 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerStarted","Data":"e204e8b283552c7cd1c90268181df7c6dbc59f90943ec0d24755237e843c7928"} Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.654585 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"e7327eaa-54f4-4f29-9041-f223ae46a6a4","Type":"ContainerDied","Data":"ea9b57bd2c189204421bf5b4e17737278b8cad87ac51116305dc3b706981678e"} Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.654728 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.741713 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:58 crc kubenswrapper[4697]: I0126 00:22:58.752974 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 26 00:22:59 crc kubenswrapper[4697]: I0126 00:22:59.679930 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerStarted","Data":"14d93e2a18793766edbbab4a2a32bd3ef3f1838fb9487c7ec8dc896c092904ec"} Jan 26 00:22:59 crc kubenswrapper[4697]: I0126 00:22:59.681378 4697 generic.go:334] "Generic (PLEG): container finished" podID="ed09d994-82cd-421d-b986-a5a68398fc8b" containerID="be5872c4ff2005b9e5bdcd7c3b62da5306e0984e3f0e6c71e2b588e311d1f224" exitCode=0 Jan 26 00:22:59 crc kubenswrapper[4697]: I0126 00:22:59.681421 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ed09d994-82cd-421d-b986-a5a68398fc8b","Type":"ContainerDied","Data":"be5872c4ff2005b9e5bdcd7c3b62da5306e0984e3f0e6c71e2b588e311d1f224"} Jan 26 00:23:00 crc kubenswrapper[4697]: I0126 00:23:00.673701 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7327eaa-54f4-4f29-9041-f223ae46a6a4" path="/var/lib/kubelet/pods/e7327eaa-54f4-4f29-9041-f223ae46a6a4/volumes" Jan 26 00:23:00 crc kubenswrapper[4697]: I0126 00:23:00.694558 4697 generic.go:334] "Generic (PLEG): container finished" podID="ed09d994-82cd-421d-b986-a5a68398fc8b" containerID="793300239cccba559b1cfb079e73721a39fcd8044fdd7757e3963f1c504387fc" exitCode=0 Jan 26 00:23:00 crc kubenswrapper[4697]: I0126 00:23:00.695344 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ed09d994-82cd-421d-b986-a5a68398fc8b","Type":"ContainerDied","Data":"793300239cccba559b1cfb079e73721a39fcd8044fdd7757e3963f1c504387fc"} Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.128639 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v"] Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.129967 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.132784 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.132825 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.132911 4697 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-69ccx" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.145742 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v"] Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.224630 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6ee241-91b4-4953-a69c-ce370643f47c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5hw4v\" (UID: \"6c6ee241-91b4-4953-a69c-ce370643f47c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.224688 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjwnn\" (UniqueName: 
\"kubernetes.io/projected/6c6ee241-91b4-4953-a69c-ce370643f47c-kube-api-access-zjwnn\") pod \"cert-manager-cainjector-855d9ccff4-5hw4v\" (UID: \"6c6ee241-91b4-4953-a69c-ce370643f47c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.326281 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6ee241-91b4-4953-a69c-ce370643f47c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5hw4v\" (UID: \"6c6ee241-91b4-4953-a69c-ce370643f47c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.326359 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjwnn\" (UniqueName: \"kubernetes.io/projected/6c6ee241-91b4-4953-a69c-ce370643f47c-kube-api-access-zjwnn\") pod \"cert-manager-cainjector-855d9ccff4-5hw4v\" (UID: \"6c6ee241-91b4-4953-a69c-ce370643f47c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.360775 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6ee241-91b4-4953-a69c-ce370643f47c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-5hw4v\" (UID: \"6c6ee241-91b4-4953-a69c-ce370643f47c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.360967 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjwnn\" (UniqueName: \"kubernetes.io/projected/6c6ee241-91b4-4953-a69c-ce370643f47c-kube-api-access-zjwnn\") pod \"cert-manager-cainjector-855d9ccff4-5hw4v\" (UID: \"6c6ee241-91b4-4953-a69c-ce370643f47c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.445494 4697 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" Jan 26 00:23:03 crc kubenswrapper[4697]: I0126 00:23:03.981599 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v"] Jan 26 00:23:03 crc kubenswrapper[4697]: W0126 00:23:03.982512 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c6ee241_91b4_4953_a69c_ce370643f47c.slice/crio-9a221a10e8665a71e40d815888b8c2dc7addeade0770423219ebe5554de0f467 WatchSource:0}: Error finding container 9a221a10e8665a71e40d815888b8c2dc7addeade0770423219ebe5554de0f467: Status 404 returned error can't find the container with id 9a221a10e8665a71e40d815888b8c2dc7addeade0770423219ebe5554de0f467 Jan 26 00:23:04 crc kubenswrapper[4697]: I0126 00:23:04.720756 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" event={"ID":"6c6ee241-91b4-4953-a69c-ce370643f47c","Type":"ContainerStarted","Data":"9a221a10e8665a71e40d815888b8c2dc7addeade0770423219ebe5554de0f467"} Jan 26 00:23:07 crc kubenswrapper[4697]: I0126 00:23:07.857634 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7jlqj"] Jan 26 00:23:07 crc kubenswrapper[4697]: I0126 00:23:07.858873 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:07 crc kubenswrapper[4697]: I0126 00:23:07.863084 4697 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qrlqj" Jan 26 00:23:07 crc kubenswrapper[4697]: I0126 00:23:07.878743 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7jlqj"] Jan 26 00:23:07 crc kubenswrapper[4697]: I0126 00:23:07.998718 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/010dfa20-a3e3-4d17-83e2-be8dabc0f8cc-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7jlqj\" (UID: \"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:07 crc kubenswrapper[4697]: I0126 00:23:07.998826 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhgd\" (UniqueName: \"kubernetes.io/projected/010dfa20-a3e3-4d17-83e2-be8dabc0f8cc-kube-api-access-gmhgd\") pod \"cert-manager-webhook-f4fb5df64-7jlqj\" (UID: \"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:08 crc kubenswrapper[4697]: I0126 00:23:08.100306 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/010dfa20-a3e3-4d17-83e2-be8dabc0f8cc-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7jlqj\" (UID: \"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:08 crc kubenswrapper[4697]: I0126 00:23:08.100482 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmhgd\" (UniqueName: \"kubernetes.io/projected/010dfa20-a3e3-4d17-83e2-be8dabc0f8cc-kube-api-access-gmhgd\") pod 
\"cert-manager-webhook-f4fb5df64-7jlqj\" (UID: \"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:08 crc kubenswrapper[4697]: I0126 00:23:08.121500 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmhgd\" (UniqueName: \"kubernetes.io/projected/010dfa20-a3e3-4d17-83e2-be8dabc0f8cc-kube-api-access-gmhgd\") pod \"cert-manager-webhook-f4fb5df64-7jlqj\" (UID: \"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:08 crc kubenswrapper[4697]: I0126 00:23:08.124138 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/010dfa20-a3e3-4d17-83e2-be8dabc0f8cc-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-7jlqj\" (UID: \"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:08 crc kubenswrapper[4697]: I0126 00:23:08.180752 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:08 crc kubenswrapper[4697]: I0126 00:23:08.447716 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-7jlqj"] Jan 26 00:23:08 crc kubenswrapper[4697]: W0126 00:23:08.452445 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod010dfa20_a3e3_4d17_83e2_be8dabc0f8cc.slice/crio-2dca4dfab8a5b25e6f1dc35d020b93c5da358dcff31e5a8357411e955f9b34a5 WatchSource:0}: Error finding container 2dca4dfab8a5b25e6f1dc35d020b93c5da358dcff31e5a8357411e955f9b34a5: Status 404 returned error can't find the container with id 2dca4dfab8a5b25e6f1dc35d020b93c5da358dcff31e5a8357411e955f9b34a5 Jan 26 00:23:08 crc kubenswrapper[4697]: I0126 00:23:08.743197 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" event={"ID":"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc","Type":"ContainerStarted","Data":"2dca4dfab8a5b25e6f1dc35d020b93c5da358dcff31e5a8357411e955f9b34a5"} Jan 26 00:23:10 crc kubenswrapper[4697]: I0126 00:23:10.756367 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ed09d994-82cd-421d-b986-a5a68398fc8b","Type":"ContainerStarted","Data":"20f5510d6731c03aadf98f85956becd2e5759aa03932087d5287db731cbc8307"} Jan 26 00:23:10 crc kubenswrapper[4697]: I0126 00:23:10.756904 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:23:10 crc kubenswrapper[4697]: I0126 00:23:10.758807 4697 generic.go:334] "Generic (PLEG): container finished" podID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerID="14d93e2a18793766edbbab4a2a32bd3ef3f1838fb9487c7ec8dc896c092904ec" exitCode=0 Jan 26 00:23:10 crc kubenswrapper[4697]: I0126 00:23:10.758838 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerDied","Data":"14d93e2a18793766edbbab4a2a32bd3ef3f1838fb9487c7ec8dc896c092904ec"} Jan 26 00:23:10 crc kubenswrapper[4697]: I0126 00:23:10.796001 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=14.452904664 podStartE2EDuration="1m5.795984215s" podCreationTimestamp="2026-01-26 00:22:05 +0000 UTC" firstStartedPulling="2026-01-26 00:22:06.001451206 +0000 UTC m=+867.638228596" lastFinishedPulling="2026-01-26 00:22:57.344530757 +0000 UTC m=+918.981308147" observedRunningTime="2026-01-26 00:23:10.793705899 +0000 UTC m=+932.430483299" watchObservedRunningTime="2026-01-26 00:23:10.795984215 +0000 UTC m=+932.432761625" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.474051 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rppnt"] Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.475840 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.483035 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rppnt"] Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.652476 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-catalog-content\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.652548 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-utilities\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.652582 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpbvf\" (UniqueName: \"kubernetes.io/projected/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-kube-api-access-fpbvf\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.754424 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-catalog-content\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.754541 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-utilities\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.754563 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpbvf\" (UniqueName: \"kubernetes.io/projected/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-kube-api-access-fpbvf\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.755702 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-catalog-content\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.756094 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-utilities\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:12 crc kubenswrapper[4697]: I0126 00:23:12.775173 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpbvf\" (UniqueName: \"kubernetes.io/projected/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-kube-api-access-fpbvf\") pod \"community-operators-rppnt\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:13 crc kubenswrapper[4697]: I0126 00:23:13.070048 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:14 crc kubenswrapper[4697]: I0126 00:23:14.169627 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rppnt"] Jan 26 00:23:15 crc kubenswrapper[4697]: I0126 00:23:15.293242 4697 generic.go:334] "Generic (PLEG): container finished" podID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerID="a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8" exitCode=0 Jan 26 00:23:15 crc kubenswrapper[4697]: I0126 00:23:15.293432 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rppnt" event={"ID":"b27fd779-0c7d-46ae-a467-1ccebdd1d93c","Type":"ContainerDied","Data":"a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8"} Jan 26 00:23:15 crc kubenswrapper[4697]: I0126 00:23:15.293649 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rppnt" event={"ID":"b27fd779-0c7d-46ae-a467-1ccebdd1d93c","Type":"ContainerStarted","Data":"d910bd0081d77181c6d1f8255fb8d27599c3528667b19b88b6c87bdce9acb475"} Jan 26 00:23:15 crc kubenswrapper[4697]: I0126 00:23:15.297870 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerStarted","Data":"7387220668b9d1d5ace03356de6fbbffe8ad67d998e2857c967133c2374d9d02"} Jan 26 00:23:16 crc kubenswrapper[4697]: I0126 00:23:16.519835 4697 generic.go:334] "Generic (PLEG): container finished" podID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerID="7387220668b9d1d5ace03356de6fbbffe8ad67d998e2857c967133c2374d9d02" exitCode=0 Jan 26 00:23:16 crc kubenswrapper[4697]: I0126 00:23:16.519876 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerDied","Data":"7387220668b9d1d5ace03356de6fbbffe8ad67d998e2857c967133c2374d9d02"} Jan 26 00:23:18 crc kubenswrapper[4697]: I0126 00:23:18.631588 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerStarted","Data":"b4750e8324818415c8bb39adee1bb312c43d982cd745de377d8db42c93fb362e"} Jan 26 00:23:18 crc kubenswrapper[4697]: I0126 00:23:18.667788 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=32.165753089 podStartE2EDuration="32.667761927s" podCreationTimestamp="2026-01-26 00:22:46 +0000 UTC" firstStartedPulling="2026-01-26 00:22:57.921626368 +0000 UTC m=+919.558403758" lastFinishedPulling="2026-01-26 00:22:58.423635206 +0000 UTC m=+920.060412596" observedRunningTime="2026-01-26 00:23:18.655329335 +0000 UTC m=+940.292106735" watchObservedRunningTime="2026-01-26 00:23:18.667761927 +0000 UTC m=+940.304539317" Jan 26 00:23:19 crc kubenswrapper[4697]: I0126 00:23:19.870968 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rppnt" event={"ID":"b27fd779-0c7d-46ae-a467-1ccebdd1d93c","Type":"ContainerStarted","Data":"5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4"} Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.619918 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dsc22"] Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.621070 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.625210 4697 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4jbc8" Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.638709 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dsc22"] Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.823989 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae2b5b8-6dfa-4fd4-8381-20796598c0a6-bound-sa-token\") pod \"cert-manager-86cb77c54b-dsc22\" (UID: \"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6\") " pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.824088 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c26ml\" (UniqueName: \"kubernetes.io/projected/8ae2b5b8-6dfa-4fd4-8381-20796598c0a6-kube-api-access-c26ml\") pod \"cert-manager-86cb77c54b-dsc22\" (UID: \"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6\") " pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.925196 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae2b5b8-6dfa-4fd4-8381-20796598c0a6-bound-sa-token\") pod \"cert-manager-86cb77c54b-dsc22\" (UID: \"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6\") " pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.925264 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c26ml\" (UniqueName: \"kubernetes.io/projected/8ae2b5b8-6dfa-4fd4-8381-20796598c0a6-kube-api-access-c26ml\") pod \"cert-manager-86cb77c54b-dsc22\" (UID: 
\"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6\") " pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.968923 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8ae2b5b8-6dfa-4fd4-8381-20796598c0a6-bound-sa-token\") pod \"cert-manager-86cb77c54b-dsc22\" (UID: \"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6\") " pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:20 crc kubenswrapper[4697]: I0126 00:23:20.971150 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c26ml\" (UniqueName: \"kubernetes.io/projected/8ae2b5b8-6dfa-4fd4-8381-20796598c0a6-kube-api-access-c26ml\") pod \"cert-manager-86cb77c54b-dsc22\" (UID: \"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6\") " pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:21 crc kubenswrapper[4697]: I0126 00:23:21.255516 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dsc22" Jan 26 00:23:21 crc kubenswrapper[4697]: I0126 00:23:21.269646 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:23:21 crc kubenswrapper[4697]: {"timestamp": "2026-01-26T00:23:21+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:23:21 crc kubenswrapper[4697]: > Jan 26 00:23:23 crc kubenswrapper[4697]: I0126 00:23:23.303656 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dsc22"] Jan 26 00:23:23 crc kubenswrapper[4697]: I0126 00:23:23.831416 4697 generic.go:334] "Generic (PLEG): container finished" podID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerID="5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4" exitCode=0 Jan 26 00:23:23 crc kubenswrapper[4697]: I0126 
00:23:23.831503 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rppnt" event={"ID":"b27fd779-0c7d-46ae-a467-1ccebdd1d93c","Type":"ContainerDied","Data":"5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4"} Jan 26 00:23:23 crc kubenswrapper[4697]: I0126 00:23:23.832989 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dsc22" event={"ID":"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6","Type":"ContainerStarted","Data":"d412f3aa409b8d58bdfb52ec69068418e0386b78e699f258b1a9e83e2d8a7600"} Jan 26 00:23:25 crc kubenswrapper[4697]: I0126 00:23:25.808449 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:23:25 crc kubenswrapper[4697]: {"timestamp": "2026-01-26T00:23:25+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:23:25 crc kubenswrapper[4697]: > Jan 26 00:23:31 crc kubenswrapper[4697]: I0126 00:23:31.232901 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:23:31 crc kubenswrapper[4697]: {"timestamp": "2026-01-26T00:23:31+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:23:31 crc kubenswrapper[4697]: > Jan 26 00:23:35 crc kubenswrapper[4697]: I0126 00:23:35.888304 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:23:35 crc kubenswrapper[4697]: {"timestamp": "2026-01-26T00:23:35+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:23:35 crc kubenswrapper[4697]: > Jan 26 00:23:36 crc 
kubenswrapper[4697]: I0126 00:23:36.328870 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:23:36 crc kubenswrapper[4697]: I0126 00:23:36.328932 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:23:40 crc kubenswrapper[4697]: I0126 00:23:40.974883 4697 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ed09d994-82cd-421d-b986-a5a68398fc8b" containerName="elasticsearch" probeResult="failure" output=< Jan 26 00:23:40 crc kubenswrapper[4697]: {"timestamp": "2026-01-26T00:23:40+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 26 00:23:40 crc kubenswrapper[4697]: > Jan 26 00:23:42 crc kubenswrapper[4697]: E0126 00:23:42.526516 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Jan 26 00:23:42 crc kubenswrapper[4697]: E0126 00:23:42.526712 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-cainjector,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/cainjector/cainjector],Args:[--leader-election-namespace=kube-system 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjwnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-855d9ccff4-5hw4v_cert-manager(6c6ee241-91b4-4953-a69c-ce370643f47c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:23:42 crc kubenswrapper[4697]: E0126 00:23:42.528978 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" podUID="6c6ee241-91b4-4953-a69c-ce370643f47c" Jan 26 00:23:43 crc kubenswrapper[4697]: E0126 00:23:43.485867 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" podUID="6c6ee241-91b4-4953-a69c-ce370643f47c" Jan 26 00:23:44 crc kubenswrapper[4697]: E0126 00:23:44.403944 4697 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Jan 26 00:23:44 crc kubenswrapper[4697]: E0126 00:23:44.404193 4697 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmhgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-7jlqj_cert-manager(010dfa20-a3e3-4d17-83e2-be8dabc0f8cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 00:23:44 crc kubenswrapper[4697]: E0126 00:23:44.405417 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" podUID="010dfa20-a3e3-4d17-83e2-be8dabc0f8cc" Jan 26 00:23:45 crc kubenswrapper[4697]: I0126 00:23:45.496545 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dsc22" event={"ID":"8ae2b5b8-6dfa-4fd4-8381-20796598c0a6","Type":"ContainerStarted","Data":"58a79d52f01836e2844147dc6b2c39c80894b1e62a612f4fe9024a178274d434"} Jan 26 00:23:45 crc kubenswrapper[4697]: I0126 00:23:45.498266 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rppnt" 
event={"ID":"b27fd779-0c7d-46ae-a467-1ccebdd1d93c","Type":"ContainerStarted","Data":"70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35"} Jan 26 00:23:45 crc kubenswrapper[4697]: I0126 00:23:45.499458 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" event={"ID":"010dfa20-a3e3-4d17-83e2-be8dabc0f8cc","Type":"ContainerStarted","Data":"aae5e73b446b7a7c2f5e6b4b5eb2ed0236b78397100ba99ad33c7ea078d904da"} Jan 26 00:23:45 crc kubenswrapper[4697]: I0126 00:23:45.499664 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:45 crc kubenswrapper[4697]: I0126 00:23:45.585379 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" podStartSLOduration=-9223371998.269415 podStartE2EDuration="38.585361163s" podCreationTimestamp="2026-01-26 00:23:07 +0000 UTC" firstStartedPulling="2026-01-26 00:23:08.454453161 +0000 UTC m=+930.091230551" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:23:45.582746947 +0000 UTC m=+967.219524357" watchObservedRunningTime="2026-01-26 00:23:45.585361163 +0000 UTC m=+967.222138553" Jan 26 00:23:45 crc kubenswrapper[4697]: I0126 00:23:45.587392 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-dsc22" podStartSLOduration=4.327411571 podStartE2EDuration="25.587386152s" podCreationTimestamp="2026-01-26 00:23:20 +0000 UTC" firstStartedPulling="2026-01-26 00:23:23.319124231 +0000 UTC m=+944.955901621" lastFinishedPulling="2026-01-26 00:23:44.579098812 +0000 UTC m=+966.215876202" observedRunningTime="2026-01-26 00:23:45.524830491 +0000 UTC m=+967.161607881" watchObservedRunningTime="2026-01-26 00:23:45.587386152 +0000 UTC m=+967.224163532" Jan 26 00:23:47 crc kubenswrapper[4697]: I0126 00:23:47.144553 4697 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 26 00:23:47 crc kubenswrapper[4697]: I0126 00:23:47.185615 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rppnt" podStartSLOduration=5.900560564 podStartE2EDuration="35.185597296s" podCreationTimestamp="2026-01-26 00:23:12 +0000 UTC" firstStartedPulling="2026-01-26 00:23:15.294955036 +0000 UTC m=+936.931732426" lastFinishedPulling="2026-01-26 00:23:44.579991768 +0000 UTC m=+966.216769158" observedRunningTime="2026-01-26 00:23:45.612627526 +0000 UTC m=+967.249404916" watchObservedRunningTime="2026-01-26 00:23:47.185597296 +0000 UTC m=+968.822374686" Jan 26 00:23:53 crc kubenswrapper[4697]: I0126 00:23:53.152292 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:53 crc kubenswrapper[4697]: I0126 00:23:53.153158 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:53 crc kubenswrapper[4697]: I0126 00:23:53.185638 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-7jlqj" Jan 26 00:23:53 crc kubenswrapper[4697]: I0126 00:23:53.222256 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:53 crc kubenswrapper[4697]: I0126 00:23:53.867736 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:53 crc kubenswrapper[4697]: I0126 00:23:53.910656 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rppnt"] Jan 26 00:23:55 crc kubenswrapper[4697]: I0126 00:23:55.842997 4697 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-rppnt" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="registry-server" containerID="cri-o://70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35" gracePeriod=2 Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.252048 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.408966 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-utilities\") pod \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.409148 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-catalog-content\") pod \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.409202 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpbvf\" (UniqueName: \"kubernetes.io/projected/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-kube-api-access-fpbvf\") pod \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\" (UID: \"b27fd779-0c7d-46ae-a467-1ccebdd1d93c\") " Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.411171 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-utilities" (OuterVolumeSpecName: "utilities") pod "b27fd779-0c7d-46ae-a467-1ccebdd1d93c" (UID: "b27fd779-0c7d-46ae-a467-1ccebdd1d93c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.414202 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-kube-api-access-fpbvf" (OuterVolumeSpecName: "kube-api-access-fpbvf") pod "b27fd779-0c7d-46ae-a467-1ccebdd1d93c" (UID: "b27fd779-0c7d-46ae-a467-1ccebdd1d93c"). InnerVolumeSpecName "kube-api-access-fpbvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.459347 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b27fd779-0c7d-46ae-a467-1ccebdd1d93c" (UID: "b27fd779-0c7d-46ae-a467-1ccebdd1d93c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.512413 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.512481 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.512498 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpbvf\" (UniqueName: \"kubernetes.io/projected/b27fd779-0c7d-46ae-a467-1ccebdd1d93c-kube-api-access-fpbvf\") on node \"crc\" DevicePath \"\"" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.850566 4697 generic.go:334] "Generic (PLEG): container finished" podID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" 
containerID="70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35" exitCode=0 Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.850610 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rppnt" event={"ID":"b27fd779-0c7d-46ae-a467-1ccebdd1d93c","Type":"ContainerDied","Data":"70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35"} Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.850633 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rppnt" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.850646 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rppnt" event={"ID":"b27fd779-0c7d-46ae-a467-1ccebdd1d93c","Type":"ContainerDied","Data":"d910bd0081d77181c6d1f8255fb8d27599c3528667b19b88b6c87bdce9acb475"} Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.850668 4697 scope.go:117] "RemoveContainer" containerID="70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.865058 4697 scope.go:117] "RemoveContainer" containerID="5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.869324 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rppnt"] Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.874145 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rppnt"] Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.882641 4697 scope.go:117] "RemoveContainer" containerID="a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.901212 4697 scope.go:117] "RemoveContainer" containerID="70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35" Jan 26 
00:23:56 crc kubenswrapper[4697]: E0126 00:23:56.901622 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35\": container with ID starting with 70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35 not found: ID does not exist" containerID="70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.901664 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35"} err="failed to get container status \"70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35\": rpc error: code = NotFound desc = could not find container \"70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35\": container with ID starting with 70b6de78df589bfccd38e73ad412fdb62b8d93a4e2459cb46bcff0a997e59a35 not found: ID does not exist" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.901687 4697 scope.go:117] "RemoveContainer" containerID="5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4" Jan 26 00:23:56 crc kubenswrapper[4697]: E0126 00:23:56.902022 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4\": container with ID starting with 5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4 not found: ID does not exist" containerID="5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.902050 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4"} err="failed to get container status 
\"5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4\": rpc error: code = NotFound desc = could not find container \"5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4\": container with ID starting with 5a1c3b81b68c4313fb894f9c9a55ba6c12498a8d6f3059a41bf3e5ef94c85ec4 not found: ID does not exist" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.902090 4697 scope.go:117] "RemoveContainer" containerID="a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8" Jan 26 00:23:56 crc kubenswrapper[4697]: E0126 00:23:56.902700 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8\": container with ID starting with a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8 not found: ID does not exist" containerID="a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8" Jan 26 00:23:56 crc kubenswrapper[4697]: I0126 00:23:56.902726 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8"} err="failed to get container status \"a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8\": rpc error: code = NotFound desc = could not find container \"a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8\": container with ID starting with a354eda52393029826831323b29b084f3eeba51b6c442c94a211b9bf4c73a4b8 not found: ID does not exist" Jan 26 00:23:58 crc kubenswrapper[4697]: I0126 00:23:58.670384 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" path="/var/lib/kubelet/pods/b27fd779-0c7d-46ae-a467-1ccebdd1d93c/volumes" Jan 26 00:23:58 crc kubenswrapper[4697]: I0126 00:23:58.875406 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" 
event={"ID":"6c6ee241-91b4-4953-a69c-ce370643f47c","Type":"ContainerStarted","Data":"08605f23ea0d90ca32bbd5f24c40eade16ccf7b804bbcc1e0379e211c552a14f"} Jan 26 00:23:58 crc kubenswrapper[4697]: I0126 00:23:58.892870 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-5hw4v" podStartSLOduration=-9223371980.961924 podStartE2EDuration="55.892852095s" podCreationTimestamp="2026-01-26 00:23:03 +0000 UTC" firstStartedPulling="2026-01-26 00:23:03.985994889 +0000 UTC m=+925.622772279" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:23:58.891626391 +0000 UTC m=+980.528403791" watchObservedRunningTime="2026-01-26 00:23:58.892852095 +0000 UTC m=+980.529629485" Jan 26 00:24:06 crc kubenswrapper[4697]: I0126 00:24:06.328552 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:24:06 crc kubenswrapper[4697]: I0126 00:24:06.329122 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:24:36 crc kubenswrapper[4697]: I0126 00:24:36.328910 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:24:36 crc kubenswrapper[4697]: I0126 00:24:36.331124 4697 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:24:36 crc kubenswrapper[4697]: I0126 00:24:36.331291 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:24:36 crc kubenswrapper[4697]: I0126 00:24:36.332046 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0256a7ce9795310ac1a75ce0ad16e52fa596c733c21aa020113015124b65b713"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:24:36 crc kubenswrapper[4697]: I0126 00:24:36.332276 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" containerID="cri-o://0256a7ce9795310ac1a75ce0ad16e52fa596c733c21aa020113015124b65b713" gracePeriod=600 Jan 26 00:24:40 crc kubenswrapper[4697]: I0126 00:24:40.653545 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerID="0256a7ce9795310ac1a75ce0ad16e52fa596c733c21aa020113015124b65b713" exitCode=0 Jan 26 00:24:40 crc kubenswrapper[4697]: I0126 00:24:40.653589 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"0256a7ce9795310ac1a75ce0ad16e52fa596c733c21aa020113015124b65b713"} Jan 26 00:24:40 crc kubenswrapper[4697]: I0126 00:24:40.653882 4697 scope.go:117] "RemoveContainer" 
containerID="ed5dd1945eec0f6d970778673fb89939e592f26c1f170f55c1d612f6dec2ea84" Jan 26 00:24:43 crc kubenswrapper[4697]: I0126 00:24:43.698559 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"9ca5196ed53511183c7e6bd35f2334806b8a3c6b9ff7736fb9da29c9d2b42d54"} Jan 26 00:25:15 crc kubenswrapper[4697]: I0126 00:25:15.900451 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_06a59088-4a71-4b38-ae32-ef82bec71c07/docker-build/0.log" Jan 26 00:25:15 crc kubenswrapper[4697]: I0126 00:25:15.903267 4697 generic.go:334] "Generic (PLEG): container finished" podID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerID="b4750e8324818415c8bb39adee1bb312c43d982cd745de377d8db42c93fb362e" exitCode=1 Jan 26 00:25:15 crc kubenswrapper[4697]: I0126 00:25:15.903356 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerDied","Data":"b4750e8324818415c8bb39adee1bb312c43d982cd745de377d8db42c93fb362e"} Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.119102 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_06a59088-4a71-4b38-ae32-ef82bec71c07/docker-build/0.log" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.120175 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248554 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-root\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248611 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-buildcachedir\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248706 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-pull\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248741 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwqt5\" (UniqueName: \"kubernetes.io/projected/06a59088-4a71-4b38-ae32-ef82bec71c07-kube-api-access-mwqt5\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248779 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-run\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248807 4697 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-node-pullsecrets\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248833 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-proxy-ca-bundles\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248876 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-push\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248901 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-build-blob-cache\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248934 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-system-configs\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248965 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-buildworkdir\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.248987 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-ca-bundles\") pod \"06a59088-4a71-4b38-ae32-ef82bec71c07\" (UID: \"06a59088-4a71-4b38-ae32-ef82bec71c07\") " Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.249360 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.250062 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.250427 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.250449 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.251115 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.251500 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.255266 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-pull" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-pull") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "builder-dockercfg-pfrjl-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.255294 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-push" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-push") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "builder-dockercfg-pfrjl-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.256279 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a59088-4a71-4b38-ae32-ef82bec71c07-kube-api-access-mwqt5" (OuterVolumeSpecName: "kube-api-access-mwqt5") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "kube-api-access-mwqt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.281464 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351155 4697 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351201 4697 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351210 4697 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351219 4697 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351229 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351239 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwqt5\" (UniqueName: \"kubernetes.io/projected/06a59088-4a71-4b38-ae32-ef82bec71c07-kube-api-access-mwqt5\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351247 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 
26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351259 4697 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/06a59088-4a71-4b38-ae32-ef82bec71c07-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351269 4697 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/06a59088-4a71-4b38-ae32-ef82bec71c07-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.351277 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/06a59088-4a71-4b38-ae32-ef82bec71c07-builder-dockercfg-pfrjl-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.416501 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.453092 4697 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.918995 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_06a59088-4a71-4b38-ae32-ef82bec71c07/docker-build/0.log" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.920366 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"06a59088-4a71-4b38-ae32-ef82bec71c07","Type":"ContainerDied","Data":"e204e8b283552c7cd1c90268181df7c6dbc59f90943ec0d24755237e843c7928"} Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.920435 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e204e8b283552c7cd1c90268181df7c6dbc59f90943ec0d24755237e843c7928" Jan 26 00:25:17 crc kubenswrapper[4697]: I0126 00:25:17.920474 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 26 00:25:18 crc kubenswrapper[4697]: I0126 00:25:18.961307 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "06a59088-4a71-4b38-ae32-ef82bec71c07" (UID: "06a59088-4a71-4b38-ae32-ef82bec71c07"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:25:18 crc kubenswrapper[4697]: I0126 00:25:18.974889 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/06a59088-4a71-4b38-ae32-ef82bec71c07-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.731966 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:25:27 crc kubenswrapper[4697]: E0126 00:25:27.733053 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="registry-server" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733095 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="registry-server" Jan 26 00:25:27 crc kubenswrapper[4697]: E0126 00:25:27.733112 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerName="docker-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733121 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerName="docker-build" Jan 26 00:25:27 crc kubenswrapper[4697]: E0126 00:25:27.733130 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="extract-content" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733139 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="extract-content" Jan 26 00:25:27 crc kubenswrapper[4697]: E0126 00:25:27.733153 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerName="git-clone" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733162 4697 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerName="git-clone" Jan 26 00:25:27 crc kubenswrapper[4697]: E0126 00:25:27.733179 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerName="manage-dockerfile" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733187 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerName="manage-dockerfile" Jan 26 00:25:27 crc kubenswrapper[4697]: E0126 00:25:27.733205 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="extract-utilities" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733212 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="extract-utilities" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733341 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27fd779-0c7d-46ae-a467-1ccebdd1d93c" containerName="registry-server" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.733370 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a59088-4a71-4b38-ae32-ef82bec71c07" containerName="docker-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.734331 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.736211 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-sys-config" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.736717 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-ca" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.736976 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-global-ca" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.737960 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-pfrjl" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.752335 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.795965 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796403 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k286\" (UniqueName: \"kubernetes.io/projected/547207f9-806d-4fbe-8313-747a6db36eb7-kube-api-access-5k286\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796578 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796623 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796649 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796698 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796715 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-build-blob-cache\") pod 
\"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796826 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796892 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796927 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.796952 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.797013 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898350 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898435 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k286\" (UniqueName: \"kubernetes.io/projected/547207f9-806d-4fbe-8313-747a6db36eb7-kube-api-access-5k286\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898469 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898488 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898517 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898548 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898568 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898595 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898618 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898636 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898654 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898679 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.898762 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.899178 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.899283 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.899766 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.900107 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.900203 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.900265 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.900352 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.900672 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.903647 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.904658 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:27 crc kubenswrapper[4697]: I0126 00:25:27.920759 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k286\" (UniqueName: \"kubernetes.io/projected/547207f9-806d-4fbe-8313-747a6db36eb7-kube-api-access-5k286\") pod \"service-telemetry-operator-3-build\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:28 crc kubenswrapper[4697]: I0126 00:25:28.052167 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:25:28 crc kubenswrapper[4697]: I0126 00:25:28.334191 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Jan 26 00:25:29 crc kubenswrapper[4697]: I0126 00:25:29.001409 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"547207f9-806d-4fbe-8313-747a6db36eb7","Type":"ContainerStarted","Data":"ac4c0ded83ad154b53d10d8a58647b6ab1d51b44e7d046c256699b6f7e7ff7ad"} Jan 26 00:25:29 crc kubenswrapper[4697]: I0126 00:25:29.001763 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"547207f9-806d-4fbe-8313-747a6db36eb7","Type":"ContainerStarted","Data":"b1a74c8246b14a19bbc9fdf8ef901cd48e838cc6fefeeab7e4c1ac056cdf7e38"} Jan 26 00:25:37 crc kubenswrapper[4697]: I0126 00:25:37.058285 4697 generic.go:334] "Generic (PLEG): container finished" podID="547207f9-806d-4fbe-8313-747a6db36eb7" containerID="ac4c0ded83ad154b53d10d8a58647b6ab1d51b44e7d046c256699b6f7e7ff7ad" exitCode=0 Jan 26 00:25:37 crc kubenswrapper[4697]: I0126 00:25:37.058401 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" 
event={"ID":"547207f9-806d-4fbe-8313-747a6db36eb7","Type":"ContainerDied","Data":"ac4c0ded83ad154b53d10d8a58647b6ab1d51b44e7d046c256699b6f7e7ff7ad"} Jan 26 00:25:38 crc kubenswrapper[4697]: I0126 00:25:38.067002 4697 generic.go:334] "Generic (PLEG): container finished" podID="547207f9-806d-4fbe-8313-747a6db36eb7" containerID="c4c14f27fc30b5704938e447ac326d7a51a69968ec7ea815b150f99706c0d3ed" exitCode=0 Jan 26 00:25:38 crc kubenswrapper[4697]: I0126 00:25:38.067090 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"547207f9-806d-4fbe-8313-747a6db36eb7","Type":"ContainerDied","Data":"c4c14f27fc30b5704938e447ac326d7a51a69968ec7ea815b150f99706c0d3ed"} Jan 26 00:25:38 crc kubenswrapper[4697]: I0126 00:25:38.099635 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_547207f9-806d-4fbe-8313-747a6db36eb7/manage-dockerfile/0.log" Jan 26 00:25:42 crc kubenswrapper[4697]: I0126 00:25:42.096854 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"547207f9-806d-4fbe-8313-747a6db36eb7","Type":"ContainerStarted","Data":"b33d2df4845f991904c1c59a31652e2e0e011aad40aee64012cc5139df9c0bc1"} Jan 26 00:25:43 crc kubenswrapper[4697]: I0126 00:25:43.141208 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-3-build" podStartSLOduration=16.141189369 podStartE2EDuration="16.141189369s" podCreationTimestamp="2026-01-26 00:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:25:43.137107794 +0000 UTC m=+1084.773885184" watchObservedRunningTime="2026-01-26 00:25:43.141189369 +0000 UTC m=+1084.777966759" Jan 26 00:27:06 crc kubenswrapper[4697]: I0126 00:27:06.328255 4697 patch_prober.go:28] interesting 
pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:06 crc kubenswrapper[4697]: I0126 00:27:06.328883 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:36 crc kubenswrapper[4697]: I0126 00:27:36.328310 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:27:36 crc kubenswrapper[4697]: I0126 00:27:36.328817 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:27:41 crc kubenswrapper[4697]: I0126 00:27:41.625162 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_547207f9-806d-4fbe-8313-747a6db36eb7/docker-build/0.log" Jan 26 00:27:41 crc kubenswrapper[4697]: I0126 00:27:41.627219 4697 generic.go:334] "Generic (PLEG): container finished" podID="547207f9-806d-4fbe-8313-747a6db36eb7" containerID="b33d2df4845f991904c1c59a31652e2e0e011aad40aee64012cc5139df9c0bc1" exitCode=1 Jan 26 00:27:41 crc kubenswrapper[4697]: I0126 00:27:41.627308 4697 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"547207f9-806d-4fbe-8313-747a6db36eb7","Type":"ContainerDied","Data":"b33d2df4845f991904c1c59a31652e2e0e011aad40aee64012cc5139df9c0bc1"} Jan 26 00:27:42 crc kubenswrapper[4697]: I0126 00:27:42.914846 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_547207f9-806d-4fbe-8313-747a6db36eb7/docker-build/0.log" Jan 26 00:27:42 crc kubenswrapper[4697]: I0126 00:27:42.915984 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.042838 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-buildcachedir\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.042917 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-system-configs\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.042972 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-root\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043016 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-ca-bundles\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043025 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043056 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-run\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043131 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-buildworkdir\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043174 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-node-pullsecrets\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043265 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-pull\") pod 
\"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043352 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-build-blob-cache\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043412 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-proxy-ca-bundles\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043441 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-push\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.043501 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k286\" (UniqueName: \"kubernetes.io/projected/547207f9-806d-4fbe-8313-747a6db36eb7-kube-api-access-5k286\") pod \"547207f9-806d-4fbe-8313-747a6db36eb7\" (UID: \"547207f9-806d-4fbe-8313-747a6db36eb7\") " Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.059303 4697 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.059432 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.060250 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.060284 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.066221 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.070638 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.076475 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-pull" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-pull") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "builder-dockercfg-pfrjl-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.076631 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-push" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-push") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "builder-dockercfg-pfrjl-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.076715 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/547207f9-806d-4fbe-8313-747a6db36eb7-kube-api-access-5k286" (OuterVolumeSpecName: "kube-api-access-5k286") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "kube-api-access-5k286". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.103796 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161309 4697 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161347 4697 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/547207f9-806d-4fbe-8313-747a6db36eb7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161358 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161372 4697 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161384 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/547207f9-806d-4fbe-8313-747a6db36eb7-builder-dockercfg-pfrjl-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161394 4697 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-5k286\" (UniqueName: \"kubernetes.io/projected/547207f9-806d-4fbe-8313-747a6db36eb7-kube-api-access-5k286\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161405 4697 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161417 4697 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/547207f9-806d-4fbe-8313-747a6db36eb7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.161427 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.253124 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.262179 4697 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.639409 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_547207f9-806d-4fbe-8313-747a6db36eb7/docker-build/0.log" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.640243 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"547207f9-806d-4fbe-8313-747a6db36eb7","Type":"ContainerDied","Data":"b1a74c8246b14a19bbc9fdf8ef901cd48e838cc6fefeeab7e4c1ac056cdf7e38"} Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.640276 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a74c8246b14a19bbc9fdf8ef901cd48e838cc6fefeeab7e4c1ac056cdf7e38" Jan 26 00:27:43 crc kubenswrapper[4697]: I0126 00:27:43.640404 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Jan 26 00:27:44 crc kubenswrapper[4697]: I0126 00:27:44.973367 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "547207f9-806d-4fbe-8313-747a6db36eb7" (UID: "547207f9-806d-4fbe-8313-747a6db36eb7"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:27:44 crc kubenswrapper[4697]: I0126 00:27:44.984023 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/547207f9-806d-4fbe-8313-747a6db36eb7-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.858771 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:27:53 crc kubenswrapper[4697]: E0126 00:27:53.859532 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547207f9-806d-4fbe-8313-747a6db36eb7" containerName="docker-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.859553 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="547207f9-806d-4fbe-8313-747a6db36eb7" containerName="docker-build" Jan 26 00:27:53 crc kubenswrapper[4697]: E0126 00:27:53.859571 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547207f9-806d-4fbe-8313-747a6db36eb7" containerName="git-clone" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.859577 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="547207f9-806d-4fbe-8313-747a6db36eb7" containerName="git-clone" Jan 26 00:27:53 crc kubenswrapper[4697]: E0126 00:27:53.859590 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="547207f9-806d-4fbe-8313-747a6db36eb7" containerName="manage-dockerfile" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.859599 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="547207f9-806d-4fbe-8313-747a6db36eb7" containerName="manage-dockerfile" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.859704 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="547207f9-806d-4fbe-8313-747a6db36eb7" containerName="docker-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.860562 4697 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.863167 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-ca" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.863485 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-sys-config" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.863533 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-global-ca" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.865331 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-pfrjl" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.885232 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914230 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914321 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914351 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914387 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914423 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914447 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914612 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5s7d\" (UniqueName: \"kubernetes.io/projected/acc2c15b-c138-439b-81da-7235084910c6-kube-api-access-j5s7d\") pod 
\"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914746 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914817 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914908 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.914979 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:53 crc kubenswrapper[4697]: I0126 00:27:53.915413 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016617 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016687 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016716 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016744 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016765 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016787 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5s7d\" (UniqueName: \"kubernetes.io/projected/acc2c15b-c138-439b-81da-7235084910c6-kube-api-access-j5s7d\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016820 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016841 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016867 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-node-pullsecrets\") 
pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016888 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016935 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.016975 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.017392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.017392 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.017656 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.017717 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.017998 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.018175 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.018237 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.018239 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.018326 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.030781 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.030985 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc 
kubenswrapper[4697]: I0126 00:27:54.033153 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5s7d\" (UniqueName: \"kubernetes.io/projected/acc2c15b-c138-439b-81da-7235084910c6-kube-api-access-j5s7d\") pod \"service-telemetry-operator-4-build\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.177388 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.374926 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Jan 26 00:27:54 crc kubenswrapper[4697]: I0126 00:27:54.714413 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"acc2c15b-c138-439b-81da-7235084910c6","Type":"ContainerStarted","Data":"4f4a69fff8d873244ba863c300d4f52eb72b7b338ff7d9a57ec642712758f2be"} Jan 26 00:27:55 crc kubenswrapper[4697]: I0126 00:27:55.724634 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"acc2c15b-c138-439b-81da-7235084910c6","Type":"ContainerStarted","Data":"1b065722678ed3015d562847a53dde42a88d2cd09efe919c97239313cb6c6999"} Jan 26 00:28:02 crc kubenswrapper[4697]: I0126 00:28:02.766297 4697 generic.go:334] "Generic (PLEG): container finished" podID="acc2c15b-c138-439b-81da-7235084910c6" containerID="1b065722678ed3015d562847a53dde42a88d2cd09efe919c97239313cb6c6999" exitCode=0 Jan 26 00:28:02 crc kubenswrapper[4697]: I0126 00:28:02.766397 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" 
event={"ID":"acc2c15b-c138-439b-81da-7235084910c6","Type":"ContainerDied","Data":"1b065722678ed3015d562847a53dde42a88d2cd09efe919c97239313cb6c6999"} Jan 26 00:28:03 crc kubenswrapper[4697]: I0126 00:28:03.774852 4697 generic.go:334] "Generic (PLEG): container finished" podID="acc2c15b-c138-439b-81da-7235084910c6" containerID="61b50677d87e329ca1b88db1d0bc5996d9699d94ce107fa7decf8df1f62e9efa" exitCode=0 Jan 26 00:28:03 crc kubenswrapper[4697]: I0126 00:28:03.775141 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"acc2c15b-c138-439b-81da-7235084910c6","Type":"ContainerDied","Data":"61b50677d87e329ca1b88db1d0bc5996d9699d94ce107fa7decf8df1f62e9efa"} Jan 26 00:28:03 crc kubenswrapper[4697]: I0126 00:28:03.827890 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_acc2c15b-c138-439b-81da-7235084910c6/manage-dockerfile/0.log" Jan 26 00:28:04 crc kubenswrapper[4697]: I0126 00:28:04.787977 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"acc2c15b-c138-439b-81da-7235084910c6","Type":"ContainerStarted","Data":"377ac280593129f62041ea6f44a44b94a76e4cd5e17b303b0b8e74a7625c30a5"} Jan 26 00:28:04 crc kubenswrapper[4697]: I0126 00:28:04.831732 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-4-build" podStartSLOduration=11.831714349 podStartE2EDuration="11.831714349s" podCreationTimestamp="2026-01-26 00:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:28:04.823247325 +0000 UTC m=+1226.460024735" watchObservedRunningTime="2026-01-26 00:28:04.831714349 +0000 UTC m=+1226.468491739" Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.328247 4697 patch_prober.go:28] interesting 
pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.328585 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.328629 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.329093 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ca5196ed53511183c7e6bd35f2334806b8a3c6b9ff7736fb9da29c9d2b42d54"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.329165 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" containerID="cri-o://9ca5196ed53511183c7e6bd35f2334806b8a3c6b9ff7736fb9da29c9d2b42d54" gracePeriod=600 Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.801180 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerID="9ca5196ed53511183c7e6bd35f2334806b8a3c6b9ff7736fb9da29c9d2b42d54" exitCode=0 Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.801237 
4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"9ca5196ed53511183c7e6bd35f2334806b8a3c6b9ff7736fb9da29c9d2b42d54"} Jan 26 00:28:06 crc kubenswrapper[4697]: I0126 00:28:06.801273 4697 scope.go:117] "RemoveContainer" containerID="0256a7ce9795310ac1a75ce0ad16e52fa596c733c21aa020113015124b65b713" Jan 26 00:28:07 crc kubenswrapper[4697]: I0126 00:28:07.809140 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"e9651ce93801261f4a6086ed411567351d504713cd692eafdc7b113cf23c118d"} Jan 26 00:29:30 crc kubenswrapper[4697]: I0126 00:29:30.340627 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_acc2c15b-c138-439b-81da-7235084910c6/docker-build/0.log" Jan 26 00:29:30 crc kubenswrapper[4697]: I0126 00:29:30.341841 4697 generic.go:334] "Generic (PLEG): container finished" podID="acc2c15b-c138-439b-81da-7235084910c6" containerID="377ac280593129f62041ea6f44a44b94a76e4cd5e17b303b0b8e74a7625c30a5" exitCode=1 Jan 26 00:29:30 crc kubenswrapper[4697]: I0126 00:29:30.341874 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"acc2c15b-c138-439b-81da-7235084910c6","Type":"ContainerDied","Data":"377ac280593129f62041ea6f44a44b94a76e4cd5e17b303b0b8e74a7625c30a5"} Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.588346 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_acc2c15b-c138-439b-81da-7235084910c6/docker-build/0.log" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.589690 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.620949 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-node-pullsecrets\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.621133 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-pull\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.621134 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.621166 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-build-blob-cache\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.621206 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-ca-bundles\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.621328 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5s7d\" (UniqueName: \"kubernetes.io/projected/acc2c15b-c138-439b-81da-7235084910c6-kube-api-access-j5s7d\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.621361 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-proxy-ca-bundles\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.622446 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-root\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.622543 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-push\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.622592 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-buildworkdir\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.622621 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-system-configs\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.622659 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-run\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.622709 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-buildcachedir\") pod \"acc2c15b-c138-439b-81da-7235084910c6\" (UID: \"acc2c15b-c138-439b-81da-7235084910c6\") " Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.623316 4697 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 
00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.623381 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.623378 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.623765 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.623677 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.624580 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.630323 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-push" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-push") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "builder-dockercfg-pfrjl-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.630360 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-pull" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-pull") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "builder-dockercfg-pfrjl-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.631331 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc2c15b-c138-439b-81da-7235084910c6-kube-api-access-j5s7d" (OuterVolumeSpecName: "kube-api-access-j5s7d") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "kube-api-access-j5s7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.661676 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725042 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725454 4697 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725467 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5s7d\" (UniqueName: \"kubernetes.io/projected/acc2c15b-c138-439b-81da-7235084910c6-kube-api-access-j5s7d\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725479 4697 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725489 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/acc2c15b-c138-439b-81da-7235084910c6-builder-dockercfg-pfrjl-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725502 4697 
reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/acc2c15b-c138-439b-81da-7235084910c6-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725513 4697 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725525 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.725535 4697 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/acc2c15b-c138-439b-81da-7235084910c6-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.806810 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:29:31 crc kubenswrapper[4697]: I0126 00:29:31.826290 4697 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:32 crc kubenswrapper[4697]: I0126 00:29:32.364651 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_acc2c15b-c138-439b-81da-7235084910c6/docker-build/0.log" Jan 26 00:29:32 crc kubenswrapper[4697]: I0126 00:29:32.366097 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"acc2c15b-c138-439b-81da-7235084910c6","Type":"ContainerDied","Data":"4f4a69fff8d873244ba863c300d4f52eb72b7b338ff7d9a57ec642712758f2be"} Jan 26 00:29:32 crc kubenswrapper[4697]: I0126 00:29:32.366148 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4a69fff8d873244ba863c300d4f52eb72b7b338ff7d9a57ec642712758f2be" Jan 26 00:29:32 crc kubenswrapper[4697]: I0126 00:29:32.366154 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Jan 26 00:29:33 crc kubenswrapper[4697]: I0126 00:29:33.387505 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "acc2c15b-c138-439b-81da-7235084910c6" (UID: "acc2c15b-c138-439b-81da-7235084910c6"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:29:33 crc kubenswrapper[4697]: I0126 00:29:33.461297 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/acc2c15b-c138-439b-81da-7235084910c6-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.104902 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:29:42 crc kubenswrapper[4697]: E0126 00:29:42.105681 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc2c15b-c138-439b-81da-7235084910c6" containerName="git-clone" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.105697 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc2c15b-c138-439b-81da-7235084910c6" containerName="git-clone" Jan 26 00:29:42 crc kubenswrapper[4697]: E0126 00:29:42.105718 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc2c15b-c138-439b-81da-7235084910c6" containerName="manage-dockerfile" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.105727 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc2c15b-c138-439b-81da-7235084910c6" containerName="manage-dockerfile" Jan 26 00:29:42 crc kubenswrapper[4697]: E0126 00:29:42.105741 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc2c15b-c138-439b-81da-7235084910c6" containerName="docker-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.105749 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc2c15b-c138-439b-81da-7235084910c6" containerName="docker-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.105905 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc2c15b-c138-439b-81da-7235084910c6" containerName="docker-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.107125 4697 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.115662 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-sys-config" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.116913 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-ca" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.117066 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-global-ca" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.117717 4697 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-pfrjl" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.133763 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.245900 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246017 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246067 4697 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9jn\" (UniqueName: \"kubernetes.io/projected/d11bce09-6d09-4433-b564-a88d5cc34f35-kube-api-access-qs9jn\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246140 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246176 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246235 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246271 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-proxy-ca-bundles\") pod 
\"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246408 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246493 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246592 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246683 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.246788 4697 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.347916 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.347967 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.347997 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.348036 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: 
\"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.348060 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.348133 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.348861 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349007 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349013 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349124 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349160 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349239 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349247 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349311 4697 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349373 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9jn\" (UniqueName: \"kubernetes.io/projected/d11bce09-6d09-4433-b564-a88d5cc34f35-kube-api-access-qs9jn\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349453 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.349497 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.350121 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: 
I0126 00:29:42.350224 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.350243 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.350501 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.353755 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-push\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.363228 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.376871 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9jn\" (UniqueName: \"kubernetes.io/projected/d11bce09-6d09-4433-b564-a88d5cc34f35-kube-api-access-qs9jn\") pod \"service-telemetry-operator-5-build\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.432400 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:29:42 crc kubenswrapper[4697]: I0126 00:29:42.615454 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Jan 26 00:29:43 crc kubenswrapper[4697]: I0126 00:29:43.441145 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"d11bce09-6d09-4433-b564-a88d5cc34f35","Type":"ContainerStarted","Data":"24f20819ab72601b250b85bdbb72a4cec93d7cd4c34109f81b07ffad743fb31f"} Jan 26 00:29:43 crc kubenswrapper[4697]: I0126 00:29:43.441444 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"d11bce09-6d09-4433-b564-a88d5cc34f35","Type":"ContainerStarted","Data":"9f6beda9361b1abc8efe157c59804dff0ee702f72b452a9c4002e8ff7d9358f4"} Jan 26 00:29:51 crc kubenswrapper[4697]: I0126 00:29:51.496051 4697 generic.go:334] "Generic (PLEG): container finished" podID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerID="24f20819ab72601b250b85bdbb72a4cec93d7cd4c34109f81b07ffad743fb31f" exitCode=0 Jan 26 00:29:51 crc kubenswrapper[4697]: I0126 00:29:51.496181 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" 
event={"ID":"d11bce09-6d09-4433-b564-a88d5cc34f35","Type":"ContainerDied","Data":"24f20819ab72601b250b85bdbb72a4cec93d7cd4c34109f81b07ffad743fb31f"} Jan 26 00:29:52 crc kubenswrapper[4697]: I0126 00:29:52.504601 4697 generic.go:334] "Generic (PLEG): container finished" podID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerID="66ef7cdb61b73dafbe219c2c9d916d73c2322a129db31f9a5738af2b36b32eec" exitCode=0 Jan 26 00:29:52 crc kubenswrapper[4697]: I0126 00:29:52.504689 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"d11bce09-6d09-4433-b564-a88d5cc34f35","Type":"ContainerDied","Data":"66ef7cdb61b73dafbe219c2c9d916d73c2322a129db31f9a5738af2b36b32eec"} Jan 26 00:29:52 crc kubenswrapper[4697]: I0126 00:29:52.550503 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_d11bce09-6d09-4433-b564-a88d5cc34f35/manage-dockerfile/0.log" Jan 26 00:29:53 crc kubenswrapper[4697]: I0126 00:29:53.513858 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"d11bce09-6d09-4433-b564-a88d5cc34f35","Type":"ContainerStarted","Data":"202c1f40e4caf137f939275817f06f60c300fcec50d6e54eecfbcaeeecff2d1e"} Jan 26 00:29:53 crc kubenswrapper[4697]: I0126 00:29:53.551383 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-5-build" podStartSLOduration=11.551359676 podStartE2EDuration="11.551359676s" podCreationTimestamp="2026-01-26 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 00:29:53.544362106 +0000 UTC m=+1335.181139506" watchObservedRunningTime="2026-01-26 00:29:53.551359676 +0000 UTC m=+1335.188137086" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.144194 4697 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg"] Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.145938 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.149171 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.149190 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.153335 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg"] Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.295457 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3540d408-6fff-4d69-b439-b1a670a69183-config-volume\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.295658 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbwmw\" (UniqueName: \"kubernetes.io/projected/3540d408-6fff-4d69-b439-b1a670a69183-kube-api-access-dbwmw\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.295751 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3540d408-6fff-4d69-b439-b1a670a69183-secret-volume\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.396739 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3540d408-6fff-4d69-b439-b1a670a69183-config-volume\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.396831 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbwmw\" (UniqueName: \"kubernetes.io/projected/3540d408-6fff-4d69-b439-b1a670a69183-kube-api-access-dbwmw\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.396869 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3540d408-6fff-4d69-b439-b1a670a69183-secret-volume\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.401198 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3540d408-6fff-4d69-b439-b1a670a69183-config-volume\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.403206 4697 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3540d408-6fff-4d69-b439-b1a670a69183-secret-volume\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.423771 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbwmw\" (UniqueName: \"kubernetes.io/projected/3540d408-6fff-4d69-b439-b1a670a69183-kube-api-access-dbwmw\") pod \"collect-profiles-29489790-cfqvg\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.473416 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:00 crc kubenswrapper[4697]: I0126 00:30:00.654975 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg"] Jan 26 00:30:01 crc kubenswrapper[4697]: I0126 00:30:01.570765 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" event={"ID":"3540d408-6fff-4d69-b439-b1a670a69183","Type":"ContainerStarted","Data":"580542f7ed63c6dc0b50ef89d9aa2c6bfb6a5c7040d3c53d881e5ec72d6976ab"} Jan 26 00:30:05 crc kubenswrapper[4697]: I0126 00:30:05.598946 4697 generic.go:334] "Generic (PLEG): container finished" podID="3540d408-6fff-4d69-b439-b1a670a69183" containerID="1a5b69bd7f11d1ceb2a6b9b075245626131c5d62bbd02c0e948f23c8babfa4d3" exitCode=0 Jan 26 00:30:05 crc kubenswrapper[4697]: I0126 00:30:05.598983 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" 
event={"ID":"3540d408-6fff-4d69-b439-b1a670a69183","Type":"ContainerDied","Data":"1a5b69bd7f11d1ceb2a6b9b075245626131c5d62bbd02c0e948f23c8babfa4d3"} Jan 26 00:30:06 crc kubenswrapper[4697]: I0126 00:30:06.841760 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:06 crc kubenswrapper[4697]: I0126 00:30:06.982876 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3540d408-6fff-4d69-b439-b1a670a69183-config-volume\") pod \"3540d408-6fff-4d69-b439-b1a670a69183\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " Jan 26 00:30:06 crc kubenswrapper[4697]: I0126 00:30:06.983003 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbwmw\" (UniqueName: \"kubernetes.io/projected/3540d408-6fff-4d69-b439-b1a670a69183-kube-api-access-dbwmw\") pod \"3540d408-6fff-4d69-b439-b1a670a69183\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " Jan 26 00:30:06 crc kubenswrapper[4697]: I0126 00:30:06.983061 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3540d408-6fff-4d69-b439-b1a670a69183-secret-volume\") pod \"3540d408-6fff-4d69-b439-b1a670a69183\" (UID: \"3540d408-6fff-4d69-b439-b1a670a69183\") " Jan 26 00:30:06 crc kubenswrapper[4697]: I0126 00:30:06.983461 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3540d408-6fff-4d69-b439-b1a670a69183-config-volume" (OuterVolumeSpecName: "config-volume") pod "3540d408-6fff-4d69-b439-b1a670a69183" (UID: "3540d408-6fff-4d69-b439-b1a670a69183"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:30:06 crc kubenswrapper[4697]: I0126 00:30:06.988492 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3540d408-6fff-4d69-b439-b1a670a69183-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3540d408-6fff-4d69-b439-b1a670a69183" (UID: "3540d408-6fff-4d69-b439-b1a670a69183"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:30:06 crc kubenswrapper[4697]: I0126 00:30:06.988638 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3540d408-6fff-4d69-b439-b1a670a69183-kube-api-access-dbwmw" (OuterVolumeSpecName: "kube-api-access-dbwmw") pod "3540d408-6fff-4d69-b439-b1a670a69183" (UID: "3540d408-6fff-4d69-b439-b1a670a69183"). InnerVolumeSpecName "kube-api-access-dbwmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:30:07 crc kubenswrapper[4697]: I0126 00:30:07.084106 4697 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3540d408-6fff-4d69-b439-b1a670a69183-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:07 crc kubenswrapper[4697]: I0126 00:30:07.084162 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbwmw\" (UniqueName: \"kubernetes.io/projected/3540d408-6fff-4d69-b439-b1a670a69183-kube-api-access-dbwmw\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:07 crc kubenswrapper[4697]: I0126 00:30:07.084178 4697 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3540d408-6fff-4d69-b439-b1a670a69183-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 00:30:07 crc kubenswrapper[4697]: I0126 00:30:07.613160 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" 
event={"ID":"3540d408-6fff-4d69-b439-b1a670a69183","Type":"ContainerDied","Data":"580542f7ed63c6dc0b50ef89d9aa2c6bfb6a5c7040d3c53d881e5ec72d6976ab"} Jan 26 00:30:07 crc kubenswrapper[4697]: I0126 00:30:07.613199 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="580542f7ed63c6dc0b50ef89d9aa2c6bfb6a5c7040d3c53d881e5ec72d6976ab" Jan 26 00:30:07 crc kubenswrapper[4697]: I0126 00:30:07.613213 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29489790-cfqvg" Jan 26 00:30:36 crc kubenswrapper[4697]: I0126 00:30:36.328932 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:30:36 crc kubenswrapper[4697]: I0126 00:30:36.329596 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:31:05 crc kubenswrapper[4697]: I0126 00:31:05.967442 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_d11bce09-6d09-4433-b564-a88d5cc34f35/docker-build/0.log" Jan 26 00:31:05 crc kubenswrapper[4697]: I0126 00:31:05.968755 4697 generic.go:334] "Generic (PLEG): container finished" podID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerID="202c1f40e4caf137f939275817f06f60c300fcec50d6e54eecfbcaeeecff2d1e" exitCode=1 Jan 26 00:31:05 crc kubenswrapper[4697]: I0126 00:31:05.968795 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"d11bce09-6d09-4433-b564-a88d5cc34f35","Type":"ContainerDied","Data":"202c1f40e4caf137f939275817f06f60c300fcec50d6e54eecfbcaeeecff2d1e"} Jan 26 00:31:06 crc kubenswrapper[4697]: I0126 00:31:06.328822 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:06 crc kubenswrapper[4697]: I0126 00:31:06.328893 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.253842 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_d11bce09-6d09-4433-b564-a88d5cc34f35/docker-build/0.log" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.255032 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376180 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-root\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376244 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-system-configs\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376297 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-build-blob-cache\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376332 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-buildworkdir\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376360 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-run\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376384 4697 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-pull\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376418 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-ca-bundles\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376503 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9jn\" (UniqueName: \"kubernetes.io/projected/d11bce09-6d09-4433-b564-a88d5cc34f35-kube-api-access-qs9jn\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376544 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-proxy-ca-bundles\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376564 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-buildcachedir\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376590 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-node-pullsecrets\") pod 
\"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.376610 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-push\") pod \"d11bce09-6d09-4433-b564-a88d5cc34f35\" (UID: \"d11bce09-6d09-4433-b564-a88d5cc34f35\") " Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.377123 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.377134 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.377898 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.377971 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.378015 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.378386 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.383213 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-push" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-push") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "builder-dockercfg-pfrjl-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.383246 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11bce09-6d09-4433-b564-a88d5cc34f35-kube-api-access-qs9jn" (OuterVolumeSpecName: "kube-api-access-qs9jn") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "kube-api-access-qs9jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.389201 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-pull" (OuterVolumeSpecName: "builder-dockercfg-pfrjl-pull") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "builder-dockercfg-pfrjl-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.414945 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478237 4697 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478279 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478292 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-pull\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-pull\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478301 4697 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478311 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9jn\" (UniqueName: \"kubernetes.io/projected/d11bce09-6d09-4433-b564-a88d5cc34f35-kube-api-access-qs9jn\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478319 4697 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478329 4697 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-buildcachedir\") on node \"crc\" DevicePath \"\"" 
Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478337 4697 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d11bce09-6d09-4433-b564-a88d5cc34f35-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478346 4697 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-pfrjl-push\" (UniqueName: \"kubernetes.io/secret/d11bce09-6d09-4433-b564-a88d5cc34f35-builder-dockercfg-pfrjl-push\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.478354 4697 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/d11bce09-6d09-4433-b564-a88d5cc34f35-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.548148 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.579201 4697 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.985229 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_d11bce09-6d09-4433-b564-a88d5cc34f35/docker-build/0.log" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.986216 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"d11bce09-6d09-4433-b564-a88d5cc34f35","Type":"ContainerDied","Data":"9f6beda9361b1abc8efe157c59804dff0ee702f72b452a9c4002e8ff7d9358f4"} Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.986260 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f6beda9361b1abc8efe157c59804dff0ee702f72b452a9c4002e8ff7d9358f4" Jan 26 00:31:07 crc kubenswrapper[4697]: I0126 00:31:07.986300 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Jan 26 00:31:09 crc kubenswrapper[4697]: I0126 00:31:09.235657 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "d11bce09-6d09-4433-b564-a88d5cc34f35" (UID: "d11bce09-6d09-4433-b564-a88d5cc34f35"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:31:09 crc kubenswrapper[4697]: I0126 00:31:09.303174 4697 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/d11bce09-6d09-4433-b564-a88d5cc34f35-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 26 00:31:36 crc kubenswrapper[4697]: I0126 00:31:36.329093 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:31:36 crc kubenswrapper[4697]: I0126 00:31:36.329637 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:31:36 crc kubenswrapper[4697]: I0126 00:31:36.329688 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:31:36 crc kubenswrapper[4697]: I0126 00:31:36.330257 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9651ce93801261f4a6086ed411567351d504713cd692eafdc7b113cf23c118d"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:31:36 crc kubenswrapper[4697]: I0126 00:31:36.330306 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" 
containerName="machine-config-daemon" containerID="cri-o://e9651ce93801261f4a6086ed411567351d504713cd692eafdc7b113cf23c118d" gracePeriod=600 Jan 26 00:31:37 crc kubenswrapper[4697]: I0126 00:31:37.194191 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerID="e9651ce93801261f4a6086ed411567351d504713cd692eafdc7b113cf23c118d" exitCode=0 Jan 26 00:31:37 crc kubenswrapper[4697]: I0126 00:31:37.194253 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"e9651ce93801261f4a6086ed411567351d504713cd692eafdc7b113cf23c118d"} Jan 26 00:31:37 crc kubenswrapper[4697]: I0126 00:31:37.194715 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerStarted","Data":"18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b"} Jan 26 00:31:37 crc kubenswrapper[4697]: I0126 00:31:37.194739 4697 scope.go:117] "RemoveContainer" containerID="9ca5196ed53511183c7e6bd35f2334806b8a3c6b9ff7736fb9da29c9d2b42d54" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.701830 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-48zgd/must-gather-5tn4d"] Jan 26 00:31:44 crc kubenswrapper[4697]: E0126 00:31:44.702590 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3540d408-6fff-4d69-b439-b1a670a69183" containerName="collect-profiles" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.702604 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="3540d408-6fff-4d69-b439-b1a670a69183" containerName="collect-profiles" Jan 26 00:31:44 crc kubenswrapper[4697]: E0126 00:31:44.702616 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11bce09-6d09-4433-b564-a88d5cc34f35" 
containerName="manage-dockerfile" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.702623 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerName="manage-dockerfile" Jan 26 00:31:44 crc kubenswrapper[4697]: E0126 00:31:44.702637 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerName="docker-build" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.702651 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerName="docker-build" Jan 26 00:31:44 crc kubenswrapper[4697]: E0126 00:31:44.702672 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerName="git-clone" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.702679 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerName="git-clone" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.702794 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11bce09-6d09-4433-b564-a88d5cc34f35" containerName="docker-build" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.702806 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="3540d408-6fff-4d69-b439-b1a670a69183" containerName="collect-profiles" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.703551 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.715732 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-48zgd"/"kube-root-ca.crt" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.716223 4697 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-48zgd"/"default-dockercfg-97n2d" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.716487 4697 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-48zgd"/"openshift-service-ca.crt" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.723012 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-48zgd/must-gather-5tn4d"] Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.773523 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkqd\" (UniqueName: \"kubernetes.io/projected/ad326010-c6dc-47ef-b683-2198123656db-kube-api-access-5lkqd\") pod \"must-gather-5tn4d\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.773693 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad326010-c6dc-47ef-b683-2198123656db-must-gather-output\") pod \"must-gather-5tn4d\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.874960 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkqd\" (UniqueName: \"kubernetes.io/projected/ad326010-c6dc-47ef-b683-2198123656db-kube-api-access-5lkqd\") pod \"must-gather-5tn4d\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " 
pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.875105 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad326010-c6dc-47ef-b683-2198123656db-must-gather-output\") pod \"must-gather-5tn4d\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.875672 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad326010-c6dc-47ef-b683-2198123656db-must-gather-output\") pod \"must-gather-5tn4d\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:44 crc kubenswrapper[4697]: I0126 00:31:44.913952 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkqd\" (UniqueName: \"kubernetes.io/projected/ad326010-c6dc-47ef-b683-2198123656db-kube-api-access-5lkqd\") pod \"must-gather-5tn4d\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:45 crc kubenswrapper[4697]: I0126 00:31:45.020284 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:31:45 crc kubenswrapper[4697]: I0126 00:31:45.216099 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-48zgd/must-gather-5tn4d"] Jan 26 00:31:45 crc kubenswrapper[4697]: I0126 00:31:45.227051 4697 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 00:31:45 crc kubenswrapper[4697]: I0126 00:31:45.249330 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-48zgd/must-gather-5tn4d" event={"ID":"ad326010-c6dc-47ef-b683-2198123656db","Type":"ContainerStarted","Data":"1416d82bbd0d5f72b974407a757462a13e823b59d2f0c3cd6126e912749fa59b"} Jan 26 00:31:52 crc kubenswrapper[4697]: I0126 00:31:52.312365 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-48zgd/must-gather-5tn4d" event={"ID":"ad326010-c6dc-47ef-b683-2198123656db","Type":"ContainerStarted","Data":"a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d"} Jan 26 00:31:52 crc kubenswrapper[4697]: I0126 00:31:52.312877 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-48zgd/must-gather-5tn4d" event={"ID":"ad326010-c6dc-47ef-b683-2198123656db","Type":"ContainerStarted","Data":"3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6"} Jan 26 00:31:52 crc kubenswrapper[4697]: I0126 00:31:52.333512 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-48zgd/must-gather-5tn4d" podStartSLOduration=2.004186782 podStartE2EDuration="8.333493708s" podCreationTimestamp="2026-01-26 00:31:44 +0000 UTC" firstStartedPulling="2026-01-26 00:31:45.226692218 +0000 UTC m=+1446.863469608" lastFinishedPulling="2026-01-26 00:31:51.555999144 +0000 UTC m=+1453.192776534" observedRunningTime="2026-01-26 00:31:52.326771888 +0000 UTC m=+1453.963549278" watchObservedRunningTime="2026-01-26 00:31:52.333493708 +0000 UTC 
m=+1453.970271098" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.691737 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpstb"] Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.693448 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.706853 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpstb"] Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.824638 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xds6s\" (UniqueName: \"kubernetes.io/projected/05cb86ba-11cb-4c8f-9da6-c536d2983d95-kube-api-access-xds6s\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.824694 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-catalog-content\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.824742 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-utilities\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.926296 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xds6s\" (UniqueName: 
\"kubernetes.io/projected/05cb86ba-11cb-4c8f-9da6-c536d2983d95-kube-api-access-xds6s\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.926614 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-catalog-content\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.926632 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-utilities\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.927218 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-utilities\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.927319 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-catalog-content\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:09 crc kubenswrapper[4697]: I0126 00:32:09.951136 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xds6s\" (UniqueName: 
\"kubernetes.io/projected/05cb86ba-11cb-4c8f-9da6-c536d2983d95-kube-api-access-xds6s\") pod \"certified-operators-mpstb\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:10 crc kubenswrapper[4697]: I0126 00:32:10.017517 4697 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:10 crc kubenswrapper[4697]: I0126 00:32:10.506801 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpstb"] Jan 26 00:32:10 crc kubenswrapper[4697]: W0126 00:32:10.516589 4697 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05cb86ba_11cb_4c8f_9da6_c536d2983d95.slice/crio-791acbb16459c9c4b6a320fd2e3286dde32b065e45e88c64a3bae6e612d47174 WatchSource:0}: Error finding container 791acbb16459c9c4b6a320fd2e3286dde32b065e45e88c64a3bae6e612d47174: Status 404 returned error can't find the container with id 791acbb16459c9c4b6a320fd2e3286dde32b065e45e88c64a3bae6e612d47174 Jan 26 00:32:11 crc kubenswrapper[4697]: I0126 00:32:11.417863 4697 generic.go:334] "Generic (PLEG): container finished" podID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerID="09a1089294bc336df2598d4f016d957bb1c9aa6da998aa296f354a5d1283ba45" exitCode=0 Jan 26 00:32:11 crc kubenswrapper[4697]: I0126 00:32:11.417965 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpstb" event={"ID":"05cb86ba-11cb-4c8f-9da6-c536d2983d95","Type":"ContainerDied","Data":"09a1089294bc336df2598d4f016d957bb1c9aa6da998aa296f354a5d1283ba45"} Jan 26 00:32:11 crc kubenswrapper[4697]: I0126 00:32:11.418425 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpstb" 
event={"ID":"05cb86ba-11cb-4c8f-9da6-c536d2983d95","Type":"ContainerStarted","Data":"791acbb16459c9c4b6a320fd2e3286dde32b065e45e88c64a3bae6e612d47174"} Jan 26 00:32:12 crc kubenswrapper[4697]: I0126 00:32:12.430970 4697 generic.go:334] "Generic (PLEG): container finished" podID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerID="5fc0a9bba8f04adf3180a4092e883a845ee168968603c708c3cf88574b777c02" exitCode=0 Jan 26 00:32:12 crc kubenswrapper[4697]: I0126 00:32:12.431022 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpstb" event={"ID":"05cb86ba-11cb-4c8f-9da6-c536d2983d95","Type":"ContainerDied","Data":"5fc0a9bba8f04adf3180a4092e883a845ee168968603c708c3cf88574b777c02"} Jan 26 00:32:13 crc kubenswrapper[4697]: I0126 00:32:13.443622 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpstb" event={"ID":"05cb86ba-11cb-4c8f-9da6-c536d2983d95","Type":"ContainerStarted","Data":"0318788a9773520242466cbe4f3f890d778cead65549ec9b4b4b0e1f85ee6d3f"} Jan 26 00:32:13 crc kubenswrapper[4697]: I0126 00:32:13.467367 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpstb" podStartSLOduration=2.892554771 podStartE2EDuration="4.467343121s" podCreationTimestamp="2026-01-26 00:32:09 +0000 UTC" firstStartedPulling="2026-01-26 00:32:11.419454687 +0000 UTC m=+1473.056232077" lastFinishedPulling="2026-01-26 00:32:12.994243037 +0000 UTC m=+1474.631020427" observedRunningTime="2026-01-26 00:32:13.465253472 +0000 UTC m=+1475.102030872" watchObservedRunningTime="2026-01-26 00:32:13.467343121 +0000 UTC m=+1475.104120511" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.206885 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h9ldl"] Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.208505 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.217524 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9ldl"] Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.317689 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nhz\" (UniqueName: \"kubernetes.io/projected/c34eb7be-2ca8-429f-bea5-5e0a89795033-kube-api-access-86nhz\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.317757 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-catalog-content\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.318110 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-utilities\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.420122 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-utilities\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.420215 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-86nhz\" (UniqueName: \"kubernetes.io/projected/c34eb7be-2ca8-429f-bea5-5e0a89795033-kube-api-access-86nhz\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.420279 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-catalog-content\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.420694 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-catalog-content\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.420694 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-utilities\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.448347 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nhz\" (UniqueName: \"kubernetes.io/projected/c34eb7be-2ca8-429f-bea5-5e0a89795033-kube-api-access-86nhz\") pod \"redhat-operators-h9ldl\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.571695 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:17 crc kubenswrapper[4697]: I0126 00:32:17.823723 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9ldl"] Jan 26 00:32:18 crc kubenswrapper[4697]: I0126 00:32:18.471771 4697 generic.go:334] "Generic (PLEG): container finished" podID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerID="55433eaf84e010ab0495c28ea9a45225964c5f29599bcac347e9df4987f467a4" exitCode=0 Jan 26 00:32:18 crc kubenswrapper[4697]: I0126 00:32:18.471883 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9ldl" event={"ID":"c34eb7be-2ca8-429f-bea5-5e0a89795033","Type":"ContainerDied","Data":"55433eaf84e010ab0495c28ea9a45225964c5f29599bcac347e9df4987f467a4"} Jan 26 00:32:18 crc kubenswrapper[4697]: I0126 00:32:18.472108 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9ldl" event={"ID":"c34eb7be-2ca8-429f-bea5-5e0a89795033","Type":"ContainerStarted","Data":"2c36c6a40165d9cfb38c7c6019879a655fb9c7f6c73387fe937611a0eebb0110"} Jan 26 00:32:19 crc kubenswrapper[4697]: I0126 00:32:19.481562 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9ldl" event={"ID":"c34eb7be-2ca8-429f-bea5-5e0a89795033","Type":"ContainerStarted","Data":"4d2411748ce83613baedd421936351d28ad3fdb7082e2faf6763bf76ce871c6f"} Jan 26 00:32:20 crc kubenswrapper[4697]: I0126 00:32:20.018591 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:20 crc kubenswrapper[4697]: I0126 00:32:20.019114 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:20 crc kubenswrapper[4697]: I0126 00:32:20.055492 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:20 crc kubenswrapper[4697]: I0126 00:32:20.489375 4697 generic.go:334] "Generic (PLEG): container finished" podID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerID="4d2411748ce83613baedd421936351d28ad3fdb7082e2faf6763bf76ce871c6f" exitCode=0 Jan 26 00:32:20 crc kubenswrapper[4697]: I0126 00:32:20.489413 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9ldl" event={"ID":"c34eb7be-2ca8-429f-bea5-5e0a89795033","Type":"ContainerDied","Data":"4d2411748ce83613baedd421936351d28ad3fdb7082e2faf6763bf76ce871c6f"} Jan 26 00:32:20 crc kubenswrapper[4697]: I0126 00:32:20.541345 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:21 crc kubenswrapper[4697]: I0126 00:32:21.497562 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9ldl" event={"ID":"c34eb7be-2ca8-429f-bea5-5e0a89795033","Type":"ContainerStarted","Data":"6c6854ea4e9e569f090116059b555f179f70cd804dead79efd9b31e7147cd135"} Jan 26 00:32:21 crc kubenswrapper[4697]: I0126 00:32:21.516425 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h9ldl" podStartSLOduration=1.948313255 podStartE2EDuration="4.516406138s" podCreationTimestamp="2026-01-26 00:32:17 +0000 UTC" firstStartedPulling="2026-01-26 00:32:18.473698608 +0000 UTC m=+1480.110475998" lastFinishedPulling="2026-01-26 00:32:21.041791491 +0000 UTC m=+1482.678568881" observedRunningTime="2026-01-26 00:32:21.515199574 +0000 UTC m=+1483.151976964" watchObservedRunningTime="2026-01-26 00:32:21.516406138 +0000 UTC m=+1483.153183528" Jan 26 00:32:22 crc kubenswrapper[4697]: I0126 00:32:22.392419 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpstb"] Jan 26 00:32:23 crc kubenswrapper[4697]: I0126 
00:32:23.510781 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mpstb" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="registry-server" containerID="cri-o://0318788a9773520242466cbe4f3f890d778cead65549ec9b4b4b0e1f85ee6d3f" gracePeriod=2 Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.524791 4697 generic.go:334] "Generic (PLEG): container finished" podID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerID="0318788a9773520242466cbe4f3f890d778cead65549ec9b4b4b0e1f85ee6d3f" exitCode=0 Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.525051 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpstb" event={"ID":"05cb86ba-11cb-4c8f-9da6-c536d2983d95","Type":"ContainerDied","Data":"0318788a9773520242466cbe4f3f890d778cead65549ec9b4b4b0e1f85ee6d3f"} Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.525142 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpstb" event={"ID":"05cb86ba-11cb-4c8f-9da6-c536d2983d95","Type":"ContainerDied","Data":"791acbb16459c9c4b6a320fd2e3286dde32b065e45e88c64a3bae6e612d47174"} Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.525159 4697 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791acbb16459c9c4b6a320fd2e3286dde32b065e45e88c64a3bae6e612d47174" Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.556485 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.715275 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xds6s\" (UniqueName: \"kubernetes.io/projected/05cb86ba-11cb-4c8f-9da6-c536d2983d95-kube-api-access-xds6s\") pod \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.715360 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-utilities\") pod \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.715466 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-catalog-content\") pod \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\" (UID: \"05cb86ba-11cb-4c8f-9da6-c536d2983d95\") " Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.716321 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-utilities" (OuterVolumeSpecName: "utilities") pod "05cb86ba-11cb-4c8f-9da6-c536d2983d95" (UID: "05cb86ba-11cb-4c8f-9da6-c536d2983d95"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.720663 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05cb86ba-11cb-4c8f-9da6-c536d2983d95-kube-api-access-xds6s" (OuterVolumeSpecName: "kube-api-access-xds6s") pod "05cb86ba-11cb-4c8f-9da6-c536d2983d95" (UID: "05cb86ba-11cb-4c8f-9da6-c536d2983d95"). InnerVolumeSpecName "kube-api-access-xds6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.768468 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05cb86ba-11cb-4c8f-9da6-c536d2983d95" (UID: "05cb86ba-11cb-4c8f-9da6-c536d2983d95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.816779 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.816823 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xds6s\" (UniqueName: \"kubernetes.io/projected/05cb86ba-11cb-4c8f-9da6-c536d2983d95-kube-api-access-xds6s\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:24 crc kubenswrapper[4697]: I0126 00:32:24.816837 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05cb86ba-11cb-4c8f-9da6-c536d2983d95-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:25 crc kubenswrapper[4697]: I0126 00:32:25.539111 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mpstb" Jan 26 00:32:25 crc kubenswrapper[4697]: I0126 00:32:25.581270 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpstb"] Jan 26 00:32:25 crc kubenswrapper[4697]: I0126 00:32:25.586535 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mpstb"] Jan 26 00:32:26 crc kubenswrapper[4697]: I0126 00:32:26.668556 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" path="/var/lib/kubelet/pods/05cb86ba-11cb-4c8f-9da6-c536d2983d95/volumes" Jan 26 00:32:27 crc kubenswrapper[4697]: I0126 00:32:27.571868 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:27 crc kubenswrapper[4697]: I0126 00:32:27.571926 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:27 crc kubenswrapper[4697]: I0126 00:32:27.615839 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:28 crc kubenswrapper[4697]: I0126 00:32:28.599254 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:29 crc kubenswrapper[4697]: I0126 00:32:29.015711 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9ldl"] Jan 26 00:32:30 crc kubenswrapper[4697]: I0126 00:32:30.569240 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h9ldl" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="registry-server" containerID="cri-o://6c6854ea4e9e569f090116059b555f179f70cd804dead79efd9b31e7147cd135" gracePeriod=2 Jan 26 00:32:33 crc 
kubenswrapper[4697]: I0126 00:32:33.599732 4697 generic.go:334] "Generic (PLEG): container finished" podID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerID="6c6854ea4e9e569f090116059b555f179f70cd804dead79efd9b31e7147cd135" exitCode=0 Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.599812 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9ldl" event={"ID":"c34eb7be-2ca8-429f-bea5-5e0a89795033","Type":"ContainerDied","Data":"6c6854ea4e9e569f090116059b555f179f70cd804dead79efd9b31e7147cd135"} Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.773766 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.800237 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-catalog-content\") pod \"c34eb7be-2ca8-429f-bea5-5e0a89795033\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.800332 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-utilities\") pod \"c34eb7be-2ca8-429f-bea5-5e0a89795033\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.800401 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86nhz\" (UniqueName: \"kubernetes.io/projected/c34eb7be-2ca8-429f-bea5-5e0a89795033-kube-api-access-86nhz\") pod \"c34eb7be-2ca8-429f-bea5-5e0a89795033\" (UID: \"c34eb7be-2ca8-429f-bea5-5e0a89795033\") " Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.801403 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-utilities" (OuterVolumeSpecName: "utilities") pod "c34eb7be-2ca8-429f-bea5-5e0a89795033" (UID: "c34eb7be-2ca8-429f-bea5-5e0a89795033"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.805378 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34eb7be-2ca8-429f-bea5-5e0a89795033-kube-api-access-86nhz" (OuterVolumeSpecName: "kube-api-access-86nhz") pod "c34eb7be-2ca8-429f-bea5-5e0a89795033" (UID: "c34eb7be-2ca8-429f-bea5-5e0a89795033"). InnerVolumeSpecName "kube-api-access-86nhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.901240 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86nhz\" (UniqueName: \"kubernetes.io/projected/c34eb7be-2ca8-429f-bea5-5e0a89795033-kube-api-access-86nhz\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.901281 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:33 crc kubenswrapper[4697]: I0126 00:32:33.920253 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c34eb7be-2ca8-429f-bea5-5e0a89795033" (UID: "c34eb7be-2ca8-429f-bea5-5e0a89795033"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.002642 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c34eb7be-2ca8-429f-bea5-5e0a89795033-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.608879 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9ldl" event={"ID":"c34eb7be-2ca8-429f-bea5-5e0a89795033","Type":"ContainerDied","Data":"2c36c6a40165d9cfb38c7c6019879a655fb9c7f6c73387fe937611a0eebb0110"} Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.609301 4697 scope.go:117] "RemoveContainer" containerID="6c6854ea4e9e569f090116059b555f179f70cd804dead79efd9b31e7147cd135" Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.608997 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h9ldl" Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.634305 4697 scope.go:117] "RemoveContainer" containerID="4d2411748ce83613baedd421936351d28ad3fdb7082e2faf6763bf76ce871c6f" Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.642705 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9ldl"] Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.676153 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h9ldl"] Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.684814 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" path="/var/lib/kubelet/pods/c34eb7be-2ca8-429f-bea5-5e0a89795033/volumes" Jan 26 00:32:34 crc kubenswrapper[4697]: I0126 00:32:34.687522 4697 scope.go:117] "RemoveContainer" containerID="55433eaf84e010ab0495c28ea9a45225964c5f29599bcac347e9df4987f467a4" Jan 26 00:32:35 crc 
kubenswrapper[4697]: I0126 00:32:35.906266 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5n7g4_1d00cd23-9f79-4b17-9c20-006b33bf7b9e/control-plane-machine-set-operator/0.log" Jan 26 00:32:36 crc kubenswrapper[4697]: I0126 00:32:36.049902 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xr4t2_e548e601-d7aa-4a67-9a9b-14dd195fcd9e/kube-rbac-proxy/0.log" Jan 26 00:32:36 crc kubenswrapper[4697]: I0126 00:32:36.051183 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xr4t2_e548e601-d7aa-4a67-9a9b-14dd195fcd9e/machine-api-operator/0.log" Jan 26 00:32:47 crc kubenswrapper[4697]: I0126 00:32:47.794020 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-dsc22_8ae2b5b8-6dfa-4fd4-8381-20796598c0a6/cert-manager-controller/0.log" Jan 26 00:32:47 crc kubenswrapper[4697]: I0126 00:32:47.917216 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-5hw4v_6c6ee241-91b4-4953-a69c-ce370643f47c/cert-manager-cainjector/0.log" Jan 26 00:32:47 crc kubenswrapper[4697]: I0126 00:32:47.961386 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-7jlqj_010dfa20-a3e3-4d17-83e2-be8dabc0f8cc/cert-manager-webhook/0.log" Jan 26 00:33:02 crc kubenswrapper[4697]: I0126 00:33:02.161928 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cfj6x_a0c71d7a-0767-481f-9f8d-e888252ed0f3/prometheus-operator/0.log" Jan 26 00:33:02 crc kubenswrapper[4697]: I0126 00:33:02.298050 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z_1c53bce5-4c78-4410-94da-1feadaf217a6/prometheus-operator-admission-webhook/0.log" Jan 26 00:33:02 crc kubenswrapper[4697]: I0126 00:33:02.333249 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf_44233f32-bd83-47a6-bcee-47c8b02e5e0b/prometheus-operator-admission-webhook/0.log" Jan 26 00:33:02 crc kubenswrapper[4697]: I0126 00:33:02.477877 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wvrz9_d715f6c3-3dad-4e23-99a7-fed27f169907/operator/0.log" Jan 26 00:33:02 crc kubenswrapper[4697]: I0126 00:33:02.547752 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-xf6fx_ac806297-3fe1-4e19-8a22-d98dd2bfbbfd/perses-operator/0.log" Jan 26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.226528 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_f4412bc5-df84-4f33-9640-a98bc9e0f9cc/util/0.log" Jan 26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.407529 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_f4412bc5-df84-4f33-9640-a98bc9e0f9cc/pull/0.log" Jan 26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.427951 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_f4412bc5-df84-4f33-9640-a98bc9e0f9cc/util/0.log" Jan 26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.466799 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_f4412bc5-df84-4f33-9640-a98bc9e0f9cc/pull/0.log" Jan 
26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.678863 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_f4412bc5-df84-4f33-9640-a98bc9e0f9cc/extract/0.log" Jan 26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.694320 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_f4412bc5-df84-4f33-9640-a98bc9e0f9cc/util/0.log" Jan 26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.719807 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931akvsj8_f4412bc5-df84-4f33-9640-a98bc9e0f9cc/pull/0.log" Jan 26 00:33:17 crc kubenswrapper[4697]: I0126 00:33:17.858674 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk_4c2b0caf-95fc-4900-b54d-365c27b99671/util/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.030175 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk_4c2b0caf-95fc-4900-b54d-365c27b99671/util/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.195855 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk_4c2b0caf-95fc-4900-b54d-365c27b99671/pull/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.203787 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk_4c2b0caf-95fc-4900-b54d-365c27b99671/pull/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.351113 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk_4c2b0caf-95fc-4900-b54d-365c27b99671/util/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.351596 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk_4c2b0caf-95fc-4900-b54d-365c27b99671/pull/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.414502 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftq6mk_4c2b0caf-95fc-4900-b54d-365c27b99671/extract/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.724877 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k_b29c2002-ed3e-4018-84b9-c9760d243cb7/util/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.932403 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k_b29c2002-ed3e-4018-84b9-c9760d243cb7/pull/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.940359 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k_b29c2002-ed3e-4018-84b9-c9760d243cb7/util/0.log" Jan 26 00:33:18 crc kubenswrapper[4697]: I0126 00:33:18.955651 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k_b29c2002-ed3e-4018-84b9-c9760d243cb7/pull/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.130205 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k_b29c2002-ed3e-4018-84b9-c9760d243cb7/util/0.log" Jan 26 
00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.177094 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k_b29c2002-ed3e-4018-84b9-c9760d243cb7/extract/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.177233 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ects6k_b29c2002-ed3e-4018-84b9-c9760d243cb7/pull/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.363353 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6_aca9920c-df77-483a-a8ca-3bba0549b6cb/util/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.605976 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6_aca9920c-df77-483a-a8ca-3bba0549b6cb/pull/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.621625 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6_aca9920c-df77-483a-a8ca-3bba0549b6cb/pull/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.667934 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6_aca9920c-df77-483a-a8ca-3bba0549b6cb/util/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.850188 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6_aca9920c-df77-483a-a8ca-3bba0549b6cb/pull/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.903567 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6_aca9920c-df77-483a-a8ca-3bba0549b6cb/util/0.log" Jan 26 00:33:19 crc kubenswrapper[4697]: I0126 00:33:19.913272 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f084vss6_aca9920c-df77-483a-a8ca-3bba0549b6cb/extract/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.095272 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bthp6_f069b1fd-5a99-4f17-bf0b-aa757f46a13a/extract-utilities/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.271940 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bthp6_f069b1fd-5a99-4f17-bf0b-aa757f46a13a/extract-utilities/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.286856 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bthp6_f069b1fd-5a99-4f17-bf0b-aa757f46a13a/extract-content/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.302831 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bthp6_f069b1fd-5a99-4f17-bf0b-aa757f46a13a/extract-content/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.517311 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bthp6_f069b1fd-5a99-4f17-bf0b-aa757f46a13a/extract-utilities/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.521988 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bthp6_f069b1fd-5a99-4f17-bf0b-aa757f46a13a/extract-content/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.706792 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6qgbj_79e94cda-195b-4740-ae0d-fcdc027823b1/extract-utilities/0.log" Jan 26 00:33:20 crc kubenswrapper[4697]: I0126 00:33:20.816668 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bthp6_f069b1fd-5a99-4f17-bf0b-aa757f46a13a/registry-server/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.011984 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qgbj_79e94cda-195b-4740-ae0d-fcdc027823b1/extract-utilities/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.053211 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qgbj_79e94cda-195b-4740-ae0d-fcdc027823b1/extract-content/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.053828 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qgbj_79e94cda-195b-4740-ae0d-fcdc027823b1/extract-content/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.259665 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qgbj_79e94cda-195b-4740-ae0d-fcdc027823b1/extract-utilities/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.267576 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6qgbj_79e94cda-195b-4740-ae0d-fcdc027823b1/extract-content/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.312349 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-t6xt4_4b52e277-1275-4d65-8d52-5dbdec0fd0cd/marketplace-operator/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.478974 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-6qgbj_79e94cda-195b-4740-ae0d-fcdc027823b1/registry-server/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.517931 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6jgjm_e85e9cd3-9fce-43a7-9abc-a7883cd21c5c/extract-utilities/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.661990 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6jgjm_e85e9cd3-9fce-43a7-9abc-a7883cd21c5c/extract-utilities/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.680745 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6jgjm_e85e9cd3-9fce-43a7-9abc-a7883cd21c5c/extract-content/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.710567 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6jgjm_e85e9cd3-9fce-43a7-9abc-a7883cd21c5c/extract-content/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.864664 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6jgjm_e85e9cd3-9fce-43a7-9abc-a7883cd21c5c/extract-content/0.log" Jan 26 00:33:21 crc kubenswrapper[4697]: I0126 00:33:21.905250 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6jgjm_e85e9cd3-9fce-43a7-9abc-a7883cd21c5c/extract-utilities/0.log" Jan 26 00:33:22 crc kubenswrapper[4697]: I0126 00:33:22.070495 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6jgjm_e85e9cd3-9fce-43a7-9abc-a7883cd21c5c/registry-server/0.log" Jan 26 00:33:34 crc kubenswrapper[4697]: I0126 00:33:34.448537 4697 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7bdfcbcc77-pxg7z_1c53bce5-4c78-4410-94da-1feadaf217a6/prometheus-operator-admission-webhook/0.log" Jan 26 00:33:34 crc kubenswrapper[4697]: I0126 00:33:34.451105 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7bdfcbcc77-v27gf_44233f32-bd83-47a6-bcee-47c8b02e5e0b/prometheus-operator-admission-webhook/0.log" Jan 26 00:33:34 crc kubenswrapper[4697]: I0126 00:33:34.451137 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-cfj6x_a0c71d7a-0767-481f-9f8d-e888252ed0f3/prometheus-operator/0.log" Jan 26 00:33:34 crc kubenswrapper[4697]: I0126 00:33:34.589413 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-wvrz9_d715f6c3-3dad-4e23-99a7-fed27f169907/operator/0.log" Jan 26 00:33:34 crc kubenswrapper[4697]: I0126 00:33:34.636466 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-xf6fx_ac806297-3fe1-4e19-8a22-d98dd2bfbbfd/perses-operator/0.log" Jan 26 00:33:36 crc kubenswrapper[4697]: I0126 00:33:36.328435 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:33:36 crc kubenswrapper[4697]: I0126 00:33:36.328751 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:34:06 crc kubenswrapper[4697]: I0126 
00:34:06.328902 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:34:06 crc kubenswrapper[4697]: I0126 00:34:06.329522 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.126571 4697 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ntjvk"] Jan 26 00:34:10 crc kubenswrapper[4697]: E0126 00:34:10.127246 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="extract-content" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127263 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="extract-content" Jan 26 00:34:10 crc kubenswrapper[4697]: E0126 00:34:10.127280 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="registry-server" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127287 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="registry-server" Jan 26 00:34:10 crc kubenswrapper[4697]: E0126 00:34:10.127301 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="extract-utilities" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127309 4697 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="extract-utilities" Jan 26 00:34:10 crc kubenswrapper[4697]: E0126 00:34:10.127319 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="registry-server" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127326 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="registry-server" Jan 26 00:34:10 crc kubenswrapper[4697]: E0126 00:34:10.127341 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="extract-utilities" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127348 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="extract-utilities" Jan 26 00:34:10 crc kubenswrapper[4697]: E0126 00:34:10.127366 4697 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="extract-content" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127373 4697 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="extract-content" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127526 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="05cb86ba-11cb-4c8f-9da6-c536d2983d95" containerName="registry-server" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.127546 4697 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34eb7be-2ca8-429f-bea5-5e0a89795033" containerName="registry-server" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.128617 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.168776 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ntjvk"] Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.229114 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-catalog-content\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.229211 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j22f2\" (UniqueName: \"kubernetes.io/projected/f3505fca-52c3-4c67-8c78-21f8900978fb-kube-api-access-j22f2\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.229274 4697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-utilities\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.330150 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-catalog-content\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.330208 4697 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j22f2\" (UniqueName: \"kubernetes.io/projected/f3505fca-52c3-4c67-8c78-21f8900978fb-kube-api-access-j22f2\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.330245 4697 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-utilities\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.330772 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-catalog-content\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.330813 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-utilities\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.349917 4697 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j22f2\" (UniqueName: \"kubernetes.io/projected/f3505fca-52c3-4c67-8c78-21f8900978fb-kube-api-access-j22f2\") pod \"community-operators-ntjvk\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.458412 4697 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:10 crc kubenswrapper[4697]: I0126 00:34:10.724980 4697 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ntjvk"] Jan 26 00:34:11 crc kubenswrapper[4697]: I0126 00:34:11.206836 4697 generic.go:334] "Generic (PLEG): container finished" podID="f3505fca-52c3-4c67-8c78-21f8900978fb" containerID="8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173" exitCode=0 Jan 26 00:34:11 crc kubenswrapper[4697]: I0126 00:34:11.207300 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntjvk" event={"ID":"f3505fca-52c3-4c67-8c78-21f8900978fb","Type":"ContainerDied","Data":"8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173"} Jan 26 00:34:11 crc kubenswrapper[4697]: I0126 00:34:11.207346 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntjvk" event={"ID":"f3505fca-52c3-4c67-8c78-21f8900978fb","Type":"ContainerStarted","Data":"64a321a2a01c08534df80264e50cc541f8bd5c8ddf135c65e3d47594fb031777"} Jan 26 00:34:13 crc kubenswrapper[4697]: I0126 00:34:13.220632 4697 generic.go:334] "Generic (PLEG): container finished" podID="f3505fca-52c3-4c67-8c78-21f8900978fb" containerID="446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5" exitCode=0 Jan 26 00:34:13 crc kubenswrapper[4697]: I0126 00:34:13.220867 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntjvk" event={"ID":"f3505fca-52c3-4c67-8c78-21f8900978fb","Type":"ContainerDied","Data":"446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5"} Jan 26 00:34:14 crc kubenswrapper[4697]: I0126 00:34:14.230734 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntjvk" 
event={"ID":"f3505fca-52c3-4c67-8c78-21f8900978fb","Type":"ContainerStarted","Data":"2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008"} Jan 26 00:34:14 crc kubenswrapper[4697]: I0126 00:34:14.254703 4697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ntjvk" podStartSLOduration=1.71733372 podStartE2EDuration="4.25467638s" podCreationTimestamp="2026-01-26 00:34:10 +0000 UTC" firstStartedPulling="2026-01-26 00:34:11.21118736 +0000 UTC m=+1592.847964750" lastFinishedPulling="2026-01-26 00:34:13.74853002 +0000 UTC m=+1595.385307410" observedRunningTime="2026-01-26 00:34:14.250295473 +0000 UTC m=+1595.887072873" watchObservedRunningTime="2026-01-26 00:34:14.25467638 +0000 UTC m=+1595.891453770" Jan 26 00:34:20 crc kubenswrapper[4697]: I0126 00:34:20.459180 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:20 crc kubenswrapper[4697]: I0126 00:34:20.459691 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:20 crc kubenswrapper[4697]: I0126 00:34:20.498016 4697 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:21 crc kubenswrapper[4697]: I0126 00:34:21.334308 4697 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:21 crc kubenswrapper[4697]: I0126 00:34:21.389400 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ntjvk"] Jan 26 00:34:23 crc kubenswrapper[4697]: I0126 00:34:23.301635 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ntjvk" podUID="f3505fca-52c3-4c67-8c78-21f8900978fb" containerName="registry-server" 
containerID="cri-o://2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008" gracePeriod=2 Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.174379 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.310230 4697 generic.go:334] "Generic (PLEG): container finished" podID="f3505fca-52c3-4c67-8c78-21f8900978fb" containerID="2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008" exitCode=0 Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.310277 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntjvk" event={"ID":"f3505fca-52c3-4c67-8c78-21f8900978fb","Type":"ContainerDied","Data":"2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008"} Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.310305 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntjvk" event={"ID":"f3505fca-52c3-4c67-8c78-21f8900978fb","Type":"ContainerDied","Data":"64a321a2a01c08534df80264e50cc541f8bd5c8ddf135c65e3d47594fb031777"} Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.310322 4697 scope.go:117] "RemoveContainer" containerID="2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.310341 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ntjvk" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.338875 4697 scope.go:117] "RemoveContainer" containerID="446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.347166 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-catalog-content\") pod \"f3505fca-52c3-4c67-8c78-21f8900978fb\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.347263 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j22f2\" (UniqueName: \"kubernetes.io/projected/f3505fca-52c3-4c67-8c78-21f8900978fb-kube-api-access-j22f2\") pod \"f3505fca-52c3-4c67-8c78-21f8900978fb\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.347398 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-utilities\") pod \"f3505fca-52c3-4c67-8c78-21f8900978fb\" (UID: \"f3505fca-52c3-4c67-8c78-21f8900978fb\") " Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.349923 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-utilities" (OuterVolumeSpecName: "utilities") pod "f3505fca-52c3-4c67-8c78-21f8900978fb" (UID: "f3505fca-52c3-4c67-8c78-21f8900978fb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.353803 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3505fca-52c3-4c67-8c78-21f8900978fb-kube-api-access-j22f2" (OuterVolumeSpecName: "kube-api-access-j22f2") pod "f3505fca-52c3-4c67-8c78-21f8900978fb" (UID: "f3505fca-52c3-4c67-8c78-21f8900978fb"). InnerVolumeSpecName "kube-api-access-j22f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.371600 4697 scope.go:117] "RemoveContainer" containerID="8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.403106 4697 scope.go:117] "RemoveContainer" containerID="2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008" Jan 26 00:34:24 crc kubenswrapper[4697]: E0126 00:34:24.403837 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008\": container with ID starting with 2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008 not found: ID does not exist" containerID="2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.403896 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008"} err="failed to get container status \"2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008\": rpc error: code = NotFound desc = could not find container \"2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008\": container with ID starting with 2a6584b3c9190ca0197ee0317b8b88f10a49ba24c7d1e794f5fcf35fc3687008 not found: ID does not exist" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.403933 
4697 scope.go:117] "RemoveContainer" containerID="446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5" Jan 26 00:34:24 crc kubenswrapper[4697]: E0126 00:34:24.404577 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5\": container with ID starting with 446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5 not found: ID does not exist" containerID="446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.404625 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5"} err="failed to get container status \"446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5\": rpc error: code = NotFound desc = could not find container \"446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5\": container with ID starting with 446438591b1419568e74349cc80bbc5b1235afff2cc638b1536022d1b463a5f5 not found: ID does not exist" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.404655 4697 scope.go:117] "RemoveContainer" containerID="8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173" Jan 26 00:34:24 crc kubenswrapper[4697]: E0126 00:34:24.405151 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173\": container with ID starting with 8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173 not found: ID does not exist" containerID="8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.405191 4697 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173"} err="failed to get container status \"8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173\": rpc error: code = NotFound desc = could not find container \"8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173\": container with ID starting with 8ea61d30484e6584f7d1f80d2ca36f113f49c82277f6b240cf792f5844e89173 not found: ID does not exist" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.408419 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3505fca-52c3-4c67-8c78-21f8900978fb" (UID: "f3505fca-52c3-4c67-8c78-21f8900978fb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.449933 4697 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.449989 4697 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3505fca-52c3-4c67-8c78-21f8900978fb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.450004 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j22f2\" (UniqueName: \"kubernetes.io/projected/f3505fca-52c3-4c67-8c78-21f8900978fb-kube-api-access-j22f2\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.657340 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ntjvk"] Jan 26 00:34:24 crc kubenswrapper[4697]: I0126 00:34:24.671429 4697 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-ntjvk"] Jan 26 00:34:26 crc kubenswrapper[4697]: I0126 00:34:26.672403 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3505fca-52c3-4c67-8c78-21f8900978fb" path="/var/lib/kubelet/pods/f3505fca-52c3-4c67-8c78-21f8900978fb/volumes" Jan 26 00:34:28 crc kubenswrapper[4697]: I0126 00:34:28.346989 4697 generic.go:334] "Generic (PLEG): container finished" podID="ad326010-c6dc-47ef-b683-2198123656db" containerID="3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6" exitCode=0 Jan 26 00:34:28 crc kubenswrapper[4697]: I0126 00:34:28.347124 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-48zgd/must-gather-5tn4d" event={"ID":"ad326010-c6dc-47ef-b683-2198123656db","Type":"ContainerDied","Data":"3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6"} Jan 26 00:34:28 crc kubenswrapper[4697]: I0126 00:34:28.348129 4697 scope.go:117] "RemoveContainer" containerID="3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6" Jan 26 00:34:28 crc kubenswrapper[4697]: I0126 00:34:28.871999 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-48zgd_must-gather-5tn4d_ad326010-c6dc-47ef-b683-2198123656db/gather/0.log" Jan 26 00:34:35 crc kubenswrapper[4697]: I0126 00:34:35.651992 4697 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-48zgd/must-gather-5tn4d"] Jan 26 00:34:35 crc kubenswrapper[4697]: I0126 00:34:35.652772 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-48zgd/must-gather-5tn4d" podUID="ad326010-c6dc-47ef-b683-2198123656db" containerName="copy" containerID="cri-o://a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d" gracePeriod=2 Jan 26 00:34:35 crc kubenswrapper[4697]: I0126 00:34:35.664357 4697 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-48zgd/must-gather-5tn4d"] 
Jan 26 00:34:35 crc kubenswrapper[4697]: I0126 00:34:35.966278 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-48zgd_must-gather-5tn4d_ad326010-c6dc-47ef-b683-2198123656db/copy/0.log" Jan 26 00:34:35 crc kubenswrapper[4697]: I0126 00:34:35.966876 4697 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.103844 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lkqd\" (UniqueName: \"kubernetes.io/projected/ad326010-c6dc-47ef-b683-2198123656db-kube-api-access-5lkqd\") pod \"ad326010-c6dc-47ef-b683-2198123656db\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.103926 4697 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad326010-c6dc-47ef-b683-2198123656db-must-gather-output\") pod \"ad326010-c6dc-47ef-b683-2198123656db\" (UID: \"ad326010-c6dc-47ef-b683-2198123656db\") " Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.169108 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad326010-c6dc-47ef-b683-2198123656db-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ad326010-c6dc-47ef-b683-2198123656db" (UID: "ad326010-c6dc-47ef-b683-2198123656db"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.206011 4697 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad326010-c6dc-47ef-b683-2198123656db-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.546175 4697 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad326010-c6dc-47ef-b683-2198123656db-kube-api-access-5lkqd" (OuterVolumeSpecName: "kube-api-access-5lkqd") pod "ad326010-c6dc-47ef-b683-2198123656db" (UID: "ad326010-c6dc-47ef-b683-2198123656db"). InnerVolumeSpecName "kube-api-access-5lkqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.549673 4697 patch_prober.go:28] interesting pod/machine-config-daemon-mb5j7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.549726 4697 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.549767 4697 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.550275 4697 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b"} pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.550328 4697 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" containerName="machine-config-daemon" containerID="cri-o://18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" gracePeriod=600 Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.556638 4697 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-48zgd_must-gather-5tn4d_ad326010-c6dc-47ef-b683-2198123656db/copy/0.log" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.557044 4697 generic.go:334] "Generic (PLEG): container finished" podID="ad326010-c6dc-47ef-b683-2198123656db" containerID="a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d" exitCode=143 Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.557112 4697 scope.go:117] "RemoveContainer" containerID="a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.557218 4697 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-48zgd/must-gather-5tn4d" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.610114 4697 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lkqd\" (UniqueName: \"kubernetes.io/projected/ad326010-c6dc-47ef-b683-2198123656db-kube-api-access-5lkqd\") on node \"crc\" DevicePath \"\"" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.616937 4697 scope.go:117] "RemoveContainer" containerID="3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.652973 4697 scope.go:117] "RemoveContainer" containerID="a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d" Jan 26 00:34:36 crc kubenswrapper[4697]: E0126 00:34:36.653872 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d\": container with ID starting with a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d not found: ID does not exist" containerID="a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.653903 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d"} err="failed to get container status \"a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d\": rpc error: code = NotFound desc = could not find container \"a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d\": container with ID starting with a5894f5e5e9b1abe3164e06c1cba77271a7b69defdd51c06cfec740957900d3d not found: ID does not exist" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.653924 4697 scope.go:117] "RemoveContainer" containerID="3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6" Jan 26 00:34:36 crc kubenswrapper[4697]: 
E0126 00:34:36.654310 4697 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6\": container with ID starting with 3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6 not found: ID does not exist" containerID="3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.654358 4697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6"} err="failed to get container status \"3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6\": rpc error: code = NotFound desc = could not find container \"3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6\": container with ID starting with 3df537182cf762574625869b26af75e251acbb70a596d38737882d61574792a6 not found: ID does not exist" Jan 26 00:34:36 crc kubenswrapper[4697]: I0126 00:34:36.668437 4697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad326010-c6dc-47ef-b683-2198123656db" path="/var/lib/kubelet/pods/ad326010-c6dc-47ef-b683-2198123656db/volumes" Jan 26 00:34:36 crc kubenswrapper[4697]: E0126 00:34:36.675393 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:34:37 crc kubenswrapper[4697]: I0126 00:34:37.569030 4697 generic.go:334] "Generic (PLEG): container finished" podID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" 
containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" exitCode=0 Jan 26 00:34:37 crc kubenswrapper[4697]: I0126 00:34:37.569123 4697 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" event={"ID":"e2d3adb1-27d5-4fa0-a85e-35000080ac39","Type":"ContainerDied","Data":"18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b"} Jan 26 00:34:37 crc kubenswrapper[4697]: I0126 00:34:37.569210 4697 scope.go:117] "RemoveContainer" containerID="e9651ce93801261f4a6086ed411567351d504713cd692eafdc7b113cf23c118d" Jan 26 00:34:37 crc kubenswrapper[4697]: I0126 00:34:37.569715 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:34:37 crc kubenswrapper[4697]: E0126 00:34:37.570064 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:34:51 crc kubenswrapper[4697]: I0126 00:34:51.660769 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:34:51 crc kubenswrapper[4697]: E0126 00:34:51.661500 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:35:03 crc kubenswrapper[4697]: I0126 
00:35:03.659898 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:35:03 crc kubenswrapper[4697]: E0126 00:35:03.660730 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:35:17 crc kubenswrapper[4697]: I0126 00:35:17.660494 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:35:17 crc kubenswrapper[4697]: E0126 00:35:17.661405 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:35:31 crc kubenswrapper[4697]: I0126 00:35:31.660579 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:35:31 crc kubenswrapper[4697]: E0126 00:35:31.661277 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:35:42 crc 
kubenswrapper[4697]: I0126 00:35:42.660701 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:35:42 crc kubenswrapper[4697]: E0126 00:35:42.661470 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:35:53 crc kubenswrapper[4697]: I0126 00:35:53.660593 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:35:53 crc kubenswrapper[4697]: E0126 00:35:53.661307 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:36:06 crc kubenswrapper[4697]: I0126 00:36:06.661167 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:36:06 crc kubenswrapper[4697]: E0126 00:36:06.662005 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 
26 00:36:20 crc kubenswrapper[4697]: I0126 00:36:20.661050 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:36:20 crc kubenswrapper[4697]: E0126 00:36:20.661909 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:36:33 crc kubenswrapper[4697]: I0126 00:36:33.660795 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:36:33 crc kubenswrapper[4697]: E0126 00:36:33.661604 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:36:46 crc kubenswrapper[4697]: I0126 00:36:46.660847 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:36:46 crc kubenswrapper[4697]: E0126 00:36:46.661574 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" 
podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:37:01 crc kubenswrapper[4697]: I0126 00:37:01.660424 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:37:01 crc kubenswrapper[4697]: E0126 00:37:01.661152 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:37:15 crc kubenswrapper[4697]: I0126 00:37:15.661155 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:37:15 crc kubenswrapper[4697]: E0126 00:37:15.661991 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:37:27 crc kubenswrapper[4697]: I0126 00:37:27.660768 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:37:27 crc kubenswrapper[4697]: E0126 00:37:27.661550 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:37:41 crc kubenswrapper[4697]: I0126 00:37:41.660112 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:37:41 crc kubenswrapper[4697]: E0126 00:37:41.660783 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" Jan 26 00:37:56 crc kubenswrapper[4697]: I0126 00:37:56.660956 4697 scope.go:117] "RemoveContainer" containerID="18bba8edaeebc6a67eab29ccdc5275e051e233498caf5e0f020a0df1b4b8606b" Jan 26 00:37:56 crc kubenswrapper[4697]: E0126 00:37:56.661798 4697 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mb5j7_openshift-machine-config-operator(e2d3adb1-27d5-4fa0-a85e-35000080ac39)\"" pod="openshift-machine-config-operator/machine-config-daemon-mb5j7" podUID="e2d3adb1-27d5-4fa0-a85e-35000080ac39" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515135533560024453 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015135533560017370 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015135527557016524 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015135527557015474 5ustar corecore